Websocket Protocol

TrueNAS uses DDP: https://github.com/meteor/meteor/blob/devel/packages/ddp/DDP.md .

DDP (Distributed Data Protocol) is the stateful websocket protocol used for communication between the client and the server.

Websocket endpoint: /websocket

e.g. ws://truenas.domain/websocket

Example of connection

Client connects to websocket endpoint and sends a connect message.

{
  "msg": "connect",
  "version": "1",
  "support": ["1"]
}

Server answers with either connected or failed.

{
  "msg": "connected",
  "session": "b4a4d164-6bc7-11e6-8a93-00e04d680384"
}
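
If the requested protocol version is not supported, the server instead answers with a failed message; per the DDP specification linked above, it suggests a protocol version to use, e.g.:

{
  "msg": "failed",
  "version": "1"
}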

Authentication

Authentication happens by calling the auth.login method.

Request:

{
  "id": "d8e715be-6bc7-11e6-8c28-00e04d680384",
  "msg": "method",
  "method": "auth.login",
  "params": ["username", "password"]
}

Response:

{
  "id": "d8e715be-6bc7-11e6-8c28-00e04d680384",
  "msg": "result",
  "result": true,
}

acme.dns.authenticator

acme.dns.authenticator.authenticator_schemas

Get the schemas for all DNS providers supported for the ACME DNS Challenge, along with the attributes required to connect to each of them when validating a DNS Challenge.

acme.dns.authenticator.create
Arguments:
{ "type": "object", "properties": { "authenticator": { "type": "string" }, "name": { "type": "string" }, "attributes": { "type": "object", "properties": {}, "additionalProperties": true } }, "additionalProperties": false, "title": "dns_authenticator_create", "default": {} }

Create a DNS Authenticator

Create a specific DNS Authenticator containing the authentication details required for the given provider so the system can successfully connect to it.

Create a DNS Authenticator for Route53

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "acme.dns.authenticator.create",
    "params": [{
        "name": "route53_authenticator",
        "authenticator": "route53",
        "attributes": {
            "access_key_id": "AQX13",
            "secret_access_key": "JKW90"
        }
    }]
}
acme.dns.authenticator.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete DNS Authenticator of id

Delete a DNS Authenticator of id

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "acme.dns.authenticator.delete",
    "params": [
        1
    ]
}
acme.dns.authenticator.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
acme.dns.authenticator.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "name": { "type": "string" }, "attributes": { "type": "object", "properties": {}, "additionalProperties": true } }, "additionalProperties": false, "title": "dns_authenticator_update", "default": {} }

Update DNS Authenticator of id

Update a DNS Authenticator of id

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "acme.dns.authenticator.update",
    "params": [
        1,
        {
            "name": "route53_authenticator",
            "attributes": {
                "access_key_id": "AQX13",
                "secret_access_key": "JKW90"
            }
        }
    ]
}

activedirectory

activedirectory.change_trust_account_pw

Force an update of the AD machine account password. This can be used to refresh the Kerberos principals in the server's system keytab.

activedirectory.config
-
activedirectory.domain_info

Returns the following information about the currently joined domain:

LDAP server: IP address of the current LDAP server to which TrueNAS is connected.

LDAP server name: DNS name of the LDAP server to which TrueNAS is connected.

Realm: Kerberos realm.

LDAP port

Server time: timestamp.

KDC server: Kerberos KDC to which TrueNAS is connected.

Server time offset: current time offset from the DC.

Last machine account password change: timestamp.

activedirectory.get_spn_list

Return a list of Kerberos SPN entries registered for the server's Active Directory computer account. This may not reflect the state of the server's current Kerberos keytab.

activedirectory.get_state

Wrapper function for 'directoryservices.get_state'. Returns only the state of the Active Directory service.

activedirectory.leave
Arguments:
{ "type": "object", "properties": { "username": { "type": "string" }, "password": { "type": "string" } }, "additionalProperties": false, "title": "leave_ad", "default": {} }

Leave Active Directory domain. This will remove the computer object from AD and clear relevant configuration data from the NAS. This requires credentials for an appropriately-privileged user. Credentials are used to obtain a Kerberos ticket, which is used to perform the actual removal from the domain.
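
An illustrative request (the credentials shown are placeholders):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "activedirectory.leave",
    "params": [{
        "username": "Administrator",
        "password": "canary"
    }]
}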

activedirectory.nss_info_choices

Returns list of available LDAP schema choices.

activedirectory.started

Issue a no-effect command to our DC. This checks if our secure channel connection to our domain controller is still alive. It has much less impact than wbinfo -t. Default winbind request timeout is 60 seconds, and can be adjusted by the smb4.conf parameter 'winbind request timeout ='

activedirectory.update
Arguments:
{ "type": "object", "properties": { "domainname": { "type": "string" }, "bindname": { "type": "string" }, "bindpw": { "type": "string" }, "verbose_logging": { "type": "boolean" }, "use_default_domain": { "type": "boolean" }, "allow_trusted_doms": { "type": "boolean" }, "allow_dns_updates": { "type": "boolean" }, "disable_freenas_cache": { "type": "boolean" }, "restrict_pam": { "type": "boolean" }, "site": { "type": [ "string", "null" ] }, "kerberos_realm": { "type": [ "integer", "null" ] }, "kerberos_principal": { "type": [ "string", "null" ] }, "timeout": { "type": "integer" }, "dns_timeout": { "type": "integer" }, "nss_info": { "type": [ "string", "null" ], "enum": [ "SFU", "SFU20", "RFC2307" ] }, "createcomputer": { "type": "string" }, "netbiosname": { "type": "string" }, "netbiosname_b": { "type": "string" }, "netbiosalias": { "type": "array", "items": [ { "type": "null" } ] }, "enable": { "type": "boolean" } }, "additionalProperties": false, "title": "activedirectory_update", "default": {} }

Update Active Directory configuration.

domainname: full DNS domain name of the Active Directory domain.

bindname: username used to perform the initial domain join.

bindpw: password used to perform the initial domain join. User-provided credentials are used to obtain a Kerberos ticket, which is used to perform the actual domain join.

verbose_logging: increase logging during the domain join process.

use_default_domain: controls whether domain users and groups have the pre-Windows 2000 domain name prepended to the user account. When enabled, the user appears as "administrator" rather than "EXAMPLE\administrator".

allow_trusted_doms: enable support for trusted domains. If this parameter is enabled, then separate idmap backends must be configured for each trusted domain, and the idmap cache should be cleared.

allow_dns_updates: during the domain join process, automatically generate DNS entries in the AD domain for the NAS. If this is disabled, then a domain administrator must manually add appropriate DNS entries for the NAS. This parameter is recommended for TrueNAS HA servers.

disable_freenas_cache: disables active caching of AD users and groups. When disabled, only users cached in winbind's internal cache are visible in GUI dropdowns. Disabling active caching is recommended in environments with a large number of users.

site: AD site of which the NAS is a member. This parameter is auto-detected during the domain join process. If no AD site is configured for the subnet in which the NAS is configured, then this parameter appears as 'Default-First-Site-Name'. Auto-detection is only performed during the initial domain join.

kerberos_realm: Kerberos realm in which the server is located. This parameter is automatically populated during the initial domain join. If the NAS has an AD site configured and that site has multiple Kerberos servers, then the Kerberos realm is automatically updated with a site-specific configuration to use those servers. Auto-detection is only performed during the initial domain join.

kerberos_principal: Kerberos principal to use for AD-related operations outside of Samba. After the initial domain join, this field is updated with the Kerberos principal associated with the AD machine account for the NAS.

nss_info: controls how Winbind retrieves Name Service Information to construct a user's home directory and login shell. This parameter is only effective if the Active Directory Domain Controller supports the Microsoft Services for Unix (SFU) LDAP schema.

timeout: timeout value for winbind-related operations. This value may need to be increased in environments with high latencies for communications with domain controllers or a large number of domain controllers. Lowering the value may cause status checks to fail.

dns_timeout: timeout value for DNS queries during the initial domain join. This value is also set as the NETWORK_TIMEOUT in the ldap config file.

createcomputer: Active Directory Organizational Unit in which new computer accounts are created.

The OU string is read from top to bottom without RDNs. Slashes ("/") are used as delimiters, like Computers/Servers/NAS. The backslash ("\") is used to escape characters but not as a separator. Backslashes are interpreted at multiple levels and might require doubling or even quadrupling to take effect.

When this field is blank, new computer accounts are created in the Active Directory default OU.

The Active Directory service is started after a configuration update if the service was initially disabled and the updated configuration sets enable to True. The Active Directory service is stopped if enable is changed to False. If the configuration is updated but the enable state was already True and remains unchanged, then only the Samba server is restarted.

During the domain join, a Kerberos keytab for the newly-created AD machine account is generated. It is used for all future LDAP / AD interaction, and the user-provided credentials are removed.
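
An illustrative update request that performs an initial domain join (the domain name and credentials shown are placeholders):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "activedirectory.update",
    "params": [{
        "domainname": "ad.example.com",
        "bindname": "Administrator",
        "bindpw": "canary",
        "enable": true
    }]
}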

afp

afp.bindip_choices

List of valid choices for IP addresses to which to bind the AFP service.

afp.config
-
afp.update
Arguments:
{ "type": "object", "properties": { "guest": { "type": "boolean" }, "guest_user": { "type": "string" }, "bindip": { "type": "array", "items": [ { "type": "string" } ] }, "connections_limit": { "type": "integer" }, "dbpath": { "type": "string" }, "global_aux": { "type": "string" }, "map_acls": { "type": "string", "enum": [ "RIGHTS", "MODE", "NONE" ] }, "chmod_request": { "type": "string", "enum": [ "PRESERVE", "SIMPLE", "IGNORE" ] }, "loglevel": { "type": "string", "enum": [ "NONE", "MINIMUM", "NORMAL", "FULL", "DEBUG" ] } }, "additionalProperties": false, "title": "afp_update", "default": {} }

Update AFP service settings.

bindip is a list of IPs to bind AFP to. Leave blank (empty list) to bind to all available IPs.

map_acls defines how to map the effective permissions of authenticated users:

RIGHTS - Unix-style permissions
MODE - ACLs
NONE - Do not map

chmod_request defines advanced permission control that deals with ACLs:

PRESERVE - Preserve ZFS ACEs for named users and groups or POSIX ACL group mask
SIMPLE - Change permission as requested without any extra steps
IGNORE - Permission change requests are ignored
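
An illustrative update request binding AFP to a single address and limiting connections (the IP address and limit are placeholders):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "afp.update",
    "params": [{
        "bindip": ["192.168.0.10"],
        "connections_limit": 50,
        "map_acls": "RIGHTS"
    }]
}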

alert

alert.dismiss
Arguments:
{ "title": "uuid", "type": "string" }

Dismiss id alert.

alert.list

List all alerts currently in the system, including both active and dismissed ones.

alert.list_categories

List all types of alerts which the system can issue.

alert.list_policies

List all alert policies which indicate the frequency of the alerts.

alert.restore
Arguments:
{ "title": "uuid", "type": "string" }

Restore id alert which had been dismissed.

alertclasses

alertclasses.config
-
alertclasses.update
Arguments:
{ "type": "object", "properties": { "classes": { "type": "object", "properties": {}, "additionalProperties": true } }, "additionalProperties": false, "title": "alert_classes_update", "default": {} }

Update default Alert settings.

alertservice

alertservice.create
Arguments:
{ "type": "object", "properties": { "name": { "type": "string" }, "type": { "type": "string" }, "attributes": { "type": "object", "properties": {}, "additionalProperties": true }, "level": { "type": "string", "enum": [ "INFO", "NOTICE", "WARNING", "ERROR", "CRITICAL", "ALERT", "EMERGENCY" ] }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "alert_service_create", "default": {} }

Create an Alert Service of specified type.

If enabled, it sends alerts to the configured type of Alert Service.

Create an Alert Service of Mail type

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "alertservice.create",
    "params": [{
        "name": "Test Email Alert",
        "enabled": true,
        "type": "Mail",
        "attributes": {
            "email": "dev@ixsystems.com"
        },
        "settings": {
            "VolumeVersion": "HOURLY"
        }
    }]
}
alertservice.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete Alert Service of id.

alertservice.list_types

List all types of supported Alert services which can be configured with the system.

alertservice.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
alertservice.test
Arguments:
{ "type": "object", "properties": { "name": { "type": "string" }, "type": { "type": "string" }, "attributes": { "type": "object", "properties": {}, "additionalProperties": true }, "level": { "type": "string", "enum": [ "INFO", "NOTICE", "WARNING", "ERROR", "CRITICAL", "ALERT", "EMERGENCY" ] }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "alert_service_create", "default": {} }

Send a test alert using type of Alert Service.

Send a test alert using Alert Service of Mail type.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "alertservice.test",
    "params": [{
        "name": "Test Email Alert",
        "enabled": true,
        "type": "Mail",
        "attributes": {
            "email": "dev@ixsystems.com"
        },
        "settings": {}
    }]
}
alertservice.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "name": { "type": "string" }, "type": { "type": "string" }, "attributes": { "type": "object", "properties": {}, "additionalProperties": true }, "level": { "type": "string", "enum": [ "INFO", "NOTICE", "WARNING", "ERROR", "CRITICAL", "ALERT", "EMERGENCY" ] }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "alert_service_create", "default": {} }

Update Alert Service of id.

api_key

api_key.create
Arguments:
{ "type": "object", "properties": { "name": { "type": "string" } }, "additionalProperties": false, "title": "api_key_create", "default": {} }

Creates API Key.

name is a user-readable name for the key.
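
An illustrative request (the key name is a placeholder):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "api_key.create",
    "params": [{
        "name": "Automation key"
    }]
}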

api_key.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete API Key id.

api_key.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
api_key.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "name": { "type": "string" }, "reset": { "type": "boolean" } }, "additionalProperties": false, "title": "api_key_create", "default": {} }

Update API Key id.

Specify reset: true to reset this API Key.
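
An illustrative request that resets API Key 1 (the id is a placeholder):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "api_key.update",
    "params": [
        1,
        {
            "reset": true
        }
    ]
}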

auth

auth.check_user
Arguments:
{ "title": "username", "type": "string" }
{ "title": "password", "type": "string" }

Verify username and password

auth.generate_token
Arguments:
{ "type": [ "integer", "null" ], "title": "ttl", "default": 600 }
{ "type": "object", "properties": {}, "additionalProperties": true, "title": "attrs", "default": {} }

Generate a token to be used for authentication.

ttl stands for Time To Live, in seconds. The token will be invalidated if the connection has been inactive for a time greater than this.

attrs is a general purpose object/dictionary to hold information about the token.
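
An illustrative request for a token that expires after 10 minutes of connection inactivity:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "auth.generate_token",
    "params": [600, {}]
}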

auth.login
Arguments:
{ "title": "username", "type": "string" }
{ "title": "password", "type": "string" }
{ "title": "otp_token", "default": null, "type": [ "string", "null" ] }

Authenticate session using username and password. Currently only root user is allowed. otp_token must be specified if two factor authentication is enabled.

auth.login_with_api_key
Arguments:
{ "title": "api_key", "type": "string" }

Authenticate session using API Key.

auth.logout

Deauthenticates the session and, if a token exists, removes it from the session.

auth.sessions
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Returns list of active auth sessions.

Example of return value:

[ { "id": "NyhB1J5vjPjIV82yZ6caU12HLA1boDJcZNWuVQM4hQWuiyUWMGZTz2ElDp7Yk87d", "origin": "192.168.0.3:40392", "credentials": "TOKEN", "internal": False, "created_at": {"$date": 1545842426070} } ]

credentials can be UNIX_SOCKET, ROOT_TCP_SOCKET, TRUENAS_NODE, LOGIN_PASSWORD or TOKEN, depending on what authentication method was used.

If you want to exclude all internal connections from the list, call this method with the following arguments:

[[["internal", "=", false]]]

auth.token
Arguments:
{ "title": "token", "type": "string" }

Authenticate using a given token id.

auth.two_factor_auth

Returns true if two-factor authorization is required for authorizing the user's login.

auth.twofactor

auth.twofactor.config
-
auth.twofactor.provisioning_uri

Returns the provisioning URI for the OTP. This can then be encoded in a QR Code and used to provision an OTP app like Google Authenticator.

auth.twofactor.renew_secret

Generates a new secret for Two Factor Authentication. Returns boolean true on success.

auth.twofactor.update
Arguments:
{ "type": "object", "properties": { "enabled": { "type": "boolean" }, "otp_digits": { "type": "integer" }, "window": { "type": "integer" }, "interval": { "type": "integer" }, "services": { "type": "object", "properties": { "ssh": { "type": "boolean" } }, "additionalProperties": false } }, "additionalProperties": false, "title": "auth_twofactor_update", "default": {} }

otp_digits represents the number of allowed digits in the OTP.

window extends the validity to window many counter ticks before and after the current one.

interval is the time duration in seconds after which an OTP expires, counted from its creation time.
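
An illustrative request enabling two-factor authentication for SSH with 6-digit OTPs and a 30-second interval (all values are examples):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "auth.twofactor.update",
    "params": [{
        "enabled": true,
        "otp_digits": 6,
        "window": 0,
        "interval": 30,
        "services": {"ssh": true}
    }]
}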

auth.twofactor.verify
Arguments:
{ "title": "token", "type": [ "string", "null" ] }

Returns boolean true if the provided token is successfully authenticated.

boot

boot.attach
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "title": "dev", "type": "string" }
{ "type": "object", "properties": { "expand": { "type": "boolean" } }, "additionalProperties": false, "title": "options", "default": {} }

Attach a disk to the boot pool, turning a stripe into a mirror.

The expand option determines whether the new disk partition will be the maximum available size or the same size as the current disk.
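
An illustrative request attaching a disk and expanding the partition to the full disk size (the device name is a placeholder):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "boot.attach",
    "params": [
        "ada1",
        {"expand": true}
    ]
}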

boot.detach
Arguments:
{ "title": "dev", "type": "string" }

Detach given dev from boot pool.

boot.get_disks

Returns disks of the boot pool.

boot.get_scrub_interval

Get Automatic Scrub Interval value in days.

boot.get_state

Returns the current state of the boot pool, including all vdevs, properties and datasets.

boot.replace
Arguments:
{ "title": "label", "type": "string" }
{ "title": "dev", "type": "string" }

Replace device label on boot pool with dev.

boot.scrub
Job This endpoint is a Job. Please refer to the Jobs section for details.

Scrub on boot pool.

boot.set_scrub_interval
Arguments:
{ "type": "integer", "title": "interval" }

Set Automatic Scrub Interval value in days.

bootenv

bootenv.activate
Arguments:
{ "title": "id", "type": "string" }

Activates boot environment id.

bootenv.create
Arguments:
{ "type": "object", "properties": { "name": { "type": "string" }, "source": { "type": "string" } }, "additionalProperties": false, "title": "bootenv_create", "default": {} }

Create a new boot environment using name.

If a new boot environment that is a clone of another boot environment is desired, source can be passed. A new boot environment named name is then created by cloning the boot environment source.

Ensure that name and source are valid boot environment names.
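
An illustrative request cloning an existing boot environment (the names are placeholders):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "bootenv.create",
    "params": [{
        "name": "pre-upgrade",
        "source": "default"
    }]
}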

bootenv.delete
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "title": "id", "type": "string" }

Delete id boot environment. This removes the clone from the system.

bootenv.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Query all Boot Environments with query-filters and query-options.

bootenv.set_attribute
Arguments:
{ "title": "id", "type": "string" }
{ "type": "object", "properties": { "keep": { "type": "boolean" } }, "additionalProperties": false, "title": "attributes", "default": {} }

Sets attributes for boot environment id.

Currently only keep attribute is allowed.
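
An illustrative request marking a boot environment to be kept (the boot environment name is a placeholder):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "bootenv.set_attribute",
    "params": [
        "default",
        {"keep": true}
    ]
}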

bootenv.update
Arguments:
{ "title": "id", "type": "string" }
{ "type": "object", "properties": { "name": { "type": "string" } }, "additionalProperties": false, "title": "bootenv_update", "default": {} }

Update the name of boot environment id to a newly provided valid name.

certificate

certificate.acme_server_choices

Dictionary of popular ACME servers with their directory URI endpoints, which are displayed automatically in the UI.

certificate.country_choices

Returns country choices for creating a certificate/csr.

certificate.create
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "object", "properties": { "tos": { "type": "boolean" }, "dns_mapping": { "type": "object", "properties": {}, "additionalProperties": true }, "csr_id": { "type": "integer" }, "signedby": { "type": "integer" }, "key_length": { "type": "integer" }, "renew_days": { "type": "integer" }, "type": { "type": "integer" }, "lifetime": { "type": "integer" }, "serial": { "type": "integer" }, "acme_directory_uri": { "type": "string" }, "certificate": { "type": "string" }, "city": { "type": "string" }, "common": { "type": [ "string", "null" ] }, "country": { "type": "string" }, "CSR": { "type": "string" }, "ec_curve": { "type": "string", "enum": [ "BrainpoolP512R1", "BrainpoolP384R1", "BrainpoolP256R1", "SECP256K1", "ed25519" ] }, "email": { "type": "string" }, "key_type": { "type": "string", "enum": [ "RSA", "EC" ] }, "name": { "type": "string" }, "organization": { "type": "string" }, "organizational_unit": { "type": "string" }, "passphrase": { "type": "string" }, "privatekey": { "type": "string" }, "state": { "type": "string" }, "create_type": { "type": "string", "enum": [ "CERTIFICATE_CREATE_INTERNAL", "CERTIFICATE_CREATE_IMPORTED", "CERTIFICATE_CREATE_CSR", "CERTIFICATE_CREATE_IMPORTED_CSR", "CERTIFICATE_CREATE_ACME" ] }, "digest_algorithm": { "type": "string", "enum": [ "SHA1", "SHA224", "SHA256", "SHA384", "SHA512" ] }, "san": { "type": "array", "items": [ { "type": "string" } ] }, "cert_extensions": { "type": "object", "properties": { "BasicConstraints": { "type": "object", "properties": { "ca": { "type": "boolean" }, "enabled": { "type": "boolean" }, "path_length": { "type": [ "integer", "null" ] }, "extension_critical": { "type": "boolean" } }, "additionalProperties": false }, "AuthorityKeyIdentifier": { "type": "object", "properties": { "authority_cert_issuer": { "type": "boolean" }, "enabled": { "type": "boolean" }, "extension_critical": { "type": "boolean" } }, "additionalProperties": false }, "ExtendedKeyUsage": { "type": "object", "properties": { "usages": { "type": "array", "items": [ { "type": "string" } ] }, "enabled": { "type": "boolean" }, "extension_critical": { "type": "boolean" } }, "additionalProperties": false }, "KeyUsage": { "type": "object", "properties": { "enabled": { "type": "boolean" }, "digital_signature": { "type": "boolean" }, "content_commitment": { "type": "boolean" }, "key_encipherment": { "type": "boolean" }, "data_encipherment": { "type": "boolean" }, "key_agreement": { "type": "boolean" }, "key_cert_sign": { "type": "boolean" }, "crl_sign": { "type": "boolean" }, "encipher_only": { "type": "boolean" }, "decipher_only": { "type": "boolean" }, "extension_critical": { "type": "boolean" } }, "additionalProperties": false } }, "additionalProperties": false } }, "additionalProperties": false, "title": "certificate_create", "default": {} }

Create a new Certificate

Certificates are classified under the following types, and the corresponding keyword must be passed as the create_type attribute to create that type of certificate:

1) Internal Certificate - CERTIFICATE_CREATE_INTERNAL

2) Imported Certificate - CERTIFICATE_CREATE_IMPORTED

3) Certificate Signing Request - CERTIFICATE_CREATE_CSR

4) Imported Certificate Signing Request - CERTIFICATE_CREATE_IMPORTED_CSR

5) ACME Certificate - CERTIFICATE_CREATE_ACME

By default, created certificates use RSA keys. If an Elliptic Curve key is desired, it can be specified with the key_type attribute. If the ec_curve attribute is not specified for an Elliptic Curve key, the "BrainpoolP384R1" curve is used by default.

A type is selected by the Certificate Service based on create_type. The rest of the values in data are validated accordingly and finally a certificate is made based on the selected type.

cert_extensions can be specified to set X509v3 extensions.

Create an ACME based certificate

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "certificate.create",
    "params": [{
        "tos": true,
        "csr_id": 1,
        "acme_directory_uri": "https://acme-staging-v02.api.letsencrypt.org/directory",
        "name": "acme_certificate",
        "dns_mapping": {
            "domain1.com": "1"
        },
        "create_type": "CERTIFICATE_CREATE_ACME"
    }]
}

Create an Imported Certificate Signing Request

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "certificate.create",
    "params": [{
        "name": "csr",
        "CSR": "CSR string",
        "privatekey": "Private key string",
        "create_type": "CERTIFICATE_CREATE_IMPORTED_CSR"
    }]
}

Create an Internal Certificate

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "certificate.create",
    "params": [{
        "name": "internal_cert",
        "key_length": 2048,
        "lifetime": 3600,
        "city": "Nashville",
        "common": "domain1.com",
        "country": "US",
        "email": "dev@ixsystems.com",
        "organization": "iXsystems",
        "state": "Tennessee",
        "digest_algorithm": "SHA256",
        "signedby": 4,
        "create_type": "CERTIFICATE_CREATE_INTERNAL"
    }]
}
certificate.delete
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "boolean", "title": "force", "default": false }

Delete certificate of id.

If the certificate is an ACME-based certificate, the certificate service will try to revoke it by updating its status with the ACME server; if that fails, an exception is raised and the certificate is not deleted from the system. However, if force is set to true, the certificate is deleted from the system even if an error occurred while revoking it with the ACME server.

Delete certificate of id

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "certificate.delete",
    "params": [
        1,
        true
    ]
}
certificate.ec_curve_choices

Dictionary of supported EC curves.

certificate.extended_key_usage_choices

Dictionary of choices for ExtendedKeyUsage extension which can be passed over to usages attribute.

certificate.key_type_choices

Dictionary of supported key types for certificates.

certificate.profiles

Returns a dictionary of predefined options for specific use cases, e.g. OpenVPN client/server configurations, which can be used for creating certificates.

certificate.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
certificate.update
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "revoked": { "type": "boolean" }, "name": { "type": "string" } }, "additionalProperties": false, "title": "certificate_update", "default": {} }

Update certificate of id

Only name and revoked attribute can be updated.

When revoked is enabled, the specified cert id is revoked and if it belongs to a CA chain which exists on this system, its serial number is added to the CA's certificate revocation list.

Update a certificate of id

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "certificate.update",
    "params": [
        1,
        {
            "name": "updated_name"
        }
    ]
}

certificateauthority

certificateauthority.ca_sign_csr
Arguments:
{ "type": "object", "properties": { "ca_id": { "type": "integer" }, "csr_cert_id": { "type": "integer" }, "name": { "type": "string" }, "cert_extensions": { "type": "object", "properties": { "BasicConstraints": { "type": "object", "properties": { "ca": { "type": "boolean" }, "enabled": { "type": "boolean" }, "path_length": { "type": [ "integer", "null" ] }, "extension_critical": { "type": "boolean" } }, "additionalProperties": false }, "AuthorityKeyIdentifier": { "type": "object", "properties": { "authority_cert_issuer": { "type": "boolean" }, "enabled": { "type": "boolean" }, "extension_critical": { "type": "boolean" } }, "additionalProperties": false }, "ExtendedKeyUsage": { "type": "object", "properties": { "usages": { "type": "array", "items": [ { "type": "string" } ] }, "enabled": { "type": "boolean" }, "extension_critical": { "type": "boolean" } }, "additionalProperties": false }, "KeyUsage": { "type": "object", "properties": { "enabled": { "type": "boolean" }, "digital_signature": { "type": "boolean" }, "content_commitment": { "type": "boolean" }, "key_encipherment": { "type": "boolean" }, "data_encipherment": { "type": "boolean" }, "key_agreement": { "type": "boolean" }, "key_cert_sign": { "type": "boolean" }, "crl_sign": { "type": "boolean" }, "encipher_only": { "type": "boolean" }, "decipher_only": { "type": "boolean" }, "extension_critical": { "type": "boolean" } }, "additionalProperties": false } }, "additionalProperties": false } }, "additionalProperties": false, "title": "ca_sign_csr", "default": {} }

Sign CSR by Certificate Authority of ca_id

Sign a CSR and generate a certificate from it. ca_id specifies which CA is to be used for signing the CSR of csr_cert_id, which exists on the system.

cert_extensions can be specified if specific extensions are to be set in the newly signed certificate.

Sign CSR of csr_cert_id by Certificate Authority of ca_id

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "certificateauthority.ca_sign_csr",
    "params": [{
        "ca_id": 1,
        "csr_cert_id": 1,
        "name": "signed_cert"
    }]
}
certificateauthority.create
Arguments:
{ "type": "object", "properties": { "tos": { "type": "boolean" }, "csr_id": { "type": "integer" }, "signedby": { "type": "integer" }, "key_length": { "type": "integer" }, "renew_days": { "type": "integer" }, "type": { "type": "integer" }, "lifetime": { "type": "integer" }, "serial": { "type": "integer" }, "acme_directory_uri": { "type": "string" }, "certificate": { "type": "string" }, "city": { "type": "string" }, "common": { "type": [ "string", "null" ] }, "country": { "type": "string" }, "CSR": { "type": "string" }, "ec_curve": { "type": "string", "enum": [ "BrainpoolP512R1", "BrainpoolP384R1", "BrainpoolP256R1", "SECP256K1", "ed25519" ] }, "email": { "type": "string" }, "key_type": { "type": "string", "enum": [ "RSA", "EC" ] }, "name": { "type": "string" }, "organization": { "type": "string" }, "organizational_unit": { "type": "string" }, "passphrase": { "type": "string" }, "privatekey": { "type": "string" }, "state": { "type": "string" }, "create_type": { "type": "string", "enum": [ "CA_CREATE_INTERNAL", "CA_CREATE_IMPORTED", "CA_CREATE_INTERMEDIATE" ] }, "digest_algorithm": { "type": "string", "enum": [ "SHA1", "SHA224", "SHA256", "SHA384", "SHA512" ] }, "san": { "type": "array", "items": [ { "type": "string" } ] }, "cert_extensions": { "type": "object", "properties": { "BasicConstraints": { "type": "object", "properties": { "ca": { "type": "boolean" }, "enabled": { "type": "boolean" }, "path_length": { "type": [ "integer", "null" ] }, "extension_critical": { "type": "boolean" } }, "additionalProperties": false }, "AuthorityKeyIdentifier": { "type": "object", "properties": { "authority_cert_issuer": { "type": "boolean" }, "enabled": { "type": "boolean" }, "extension_critical": { "type": "boolean" } }, "additionalProperties": false }, "ExtendedKeyUsage": { "type": "object", "properties": { "usages": { "type": "array", "items": [ { "type": "string" } ] }, "enabled": { "type": "boolean" }, "extension_critical": { "type": "boolean" } }, "additionalProperties": false }, "KeyUsage": { "type": "object", "properties": { "enabled": { "type": "boolean" }, "digital_signature": { "type": "boolean" }, "content_commitment": { "type": "boolean" }, "key_encipherment": { "type": "boolean" }, "data_encipherment": { "type": "boolean" }, "key_agreement": { "type": "boolean" }, "key_cert_sign": { "type": "boolean" }, "crl_sign": { "type": "boolean" }, "encipher_only": { "type": "boolean" }, "decipher_only": { "type": "boolean" }, "extension_critical": { "type": "boolean" } }, "additionalProperties": false } }, "additionalProperties": false } }, "additionalProperties": false, "title": "certificate_create", "default": {} }

Create a new Certificate Authority

Certificate Authorities are classified under the following types, and the corresponding keyword must be passed as the create_type attribute to create that type of certificate authority:

1) Internal Certificate Authority - CA_CREATE_INTERNAL

2) Imported Certificate Authority - CA_CREATE_IMPORTED

3) Intermediate Certificate Authority - CA_CREATE_INTERMEDIATE

Created certificate authorities use RSA keys by default. If an Elliptic Curve key is desired, it can be specified with the key_type attribute. If the ec_curve attribute is not specified for an Elliptic Curve key, the "BrainpoolP384R1" curve is used by default.

A type is selected by the Certificate Authority Service based on create_type. The rest of the values are validated accordingly and finally a certificate is made based on the selected type.

cert_extensions can be specified to set X509v3 extensions.

Create an Internal Certificate Authority

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "certificateauthority.create",
    "params": [{
        "name": "internal_ca",
        "key_length": 2048,
        "lifetime": 3600,
        "city": "Nashville",
        "common": "domain1.com",
        "country": "US",
        "email": "dev@ixsystems.com",
        "organization": "iXsystems",
        "state": "Tennessee",
        "digest_algorithm": "SHA256"
        "create_type": "CA_CREATE_INTERNAL"
    }]
}

Create an Imported Certificate Authority

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "certificateauthority.create",
    "params": [{
        "name": "imported_ca",
        "certificate": "Certificate string",
        "privatekey": "Private key string",
        "create_type": "CA_CREATE_IMPORTED"
    }]
}
certificateauthority.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete a Certificate Authority of id

Delete a Certificate Authority of id

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "certificateauthority.delete",
    "params": [
        1
    ]
}
certificateauthority.profiles

Returns a dictionary of predefined options for specific use cases, e.g. OpenVPN certificate authority configurations, which can be used for creating certificate authorities.

certificateauthority.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
certificateauthority.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "revoked": { "type": "boolean" }, "ca_id": { "type": "integer" }, "csr_cert_id": { "type": "integer" }, "create_type": { "type": "string", "enum": [ "CA_SIGN_CSR" ] }, "name": { "type": "string" } }, "additionalProperties": false, "title": "ca_update", "default": {} }

Update Certificate Authority of id

Only name and revoked attribute can be updated.

If revoked is enabled, the CA and its complete chain is marked as revoked and added to the CA's certificate revocation list.

Update a Certificate Authority of id

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "certificateauthority.update",
    "params": [
        1,
        {
            "name": "updated_ca_name"
        }
    ]
}

cloudsync

cloudsync.abort
Arguments:
{ "type": "integer", "title": "id" }

Aborts cloud sync task.

cloudsync.common_task_schema
-
cloudsync.create
Arguments:
{ "type": "object", "properties": { "description": { "type": "string" }, "direction": { "type": "string", "enum": [ "PUSH", "PULL" ] }, "transfer_mode": { "type": "string", "enum": [ "SYNC", "COPY", "MOVE" ] }, "path": { "type": "string" }, "credentials": { "type": "integer" }, "encryption": { "type": "boolean" }, "filename_encryption": { "type": "boolean" }, "encryption_password": { "type": "string" }, "encryption_salt": { "type": "string" }, "schedule": { "type": "object", "properties": { "minute": { "type": "string" }, "hour": { "type": "string" }, "dom": { "type": "string" }, "month": { "type": "string" }, "dow": { "type": "string" } }, "additionalProperties": false }, "follow_symlinks": { "type": "boolean" }, "transfers": { "type": [ "integer", "null" ] }, "bwlimit": { "type": "array", "items": [ { "type": "object" } ] }, "exclude": { "type": "array", "items": [ { "type": "string" } ] }, "attributes": { "type": "object", "properties": {}, "additionalProperties": true }, "snapshot": { "type": "boolean" }, "pre_script": { "type": "string" }, "post_script": { "type": "string" }, "args": { "type": "string" }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "cloud_sync_create", "default": {} }

Creates a new cloud_sync entry.

Create a new cloud_sync task using Amazon S3 attributes, scheduled to run every hour.

{
  "id": "6841f242-840a-11e6-a437-00e04d680384",
  "msg": "method",
  "method": "cloudsync.create",
  "params": [{
    "description": "s3 sync",
    "path": "/mnt/tank",
    "credentials": 1,
    "minute": "00",
    "hour": "*",
    "daymonth": "*",
    "month": "*",
    "attributes": {
      "bucket": "mybucket",
      "folder": ""
    },
    "enabled": true
  }]
}
cloudsync.delete
Arguments:
{ "type": "integer", "title": "id" }

Deletes cloud_sync entry id.

cloudsync.list_buckets
Arguments:
{ "type": "integer", "title": "credentials_id" }
-
cloudsync.list_directory
Arguments:
{ "type": "object", "properties": { "credentials": { "type": "integer" }, "encryption": { "type": "boolean" }, "filename_encryption": { "type": "boolean" }, "encryption_password": { "type": "string" }, "encryption_salt": { "type": "string" }, "attributes": { "type": "object", "properties": {}, "additionalProperties": true }, "args": { "type": "string" } }, "additionalProperties": false, "title": "cloud_sync_ls", "default": {} }

List contents of a remote bucket / directory.

If the remote supports buckets, the path is constructed from two keys, "bucket" and "folder", in attributes. If the remote does not support buckets, the path is constructed using only the "folder" key in attributes. "folder" is the directory name and "bucket" is the bucket name on the remote.

Path examples:

S3 Service: bucketname/directory/name

Dropbox Service: directory/name

credentials is a valid id of a Cloud Sync Credential which will be used to connect to the provider.
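
An illustrative request listing a folder in an S3 bucket (the credential id, bucket, and folder names are placeholders):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "cloudsync.list_directory",
    "params": [{
        "credentials": 1,
        "attributes": {
            "bucket": "mybucket",
            "folder": "backups"
        }
    }]
}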

cloudsync.onedrive_list_drives
Arguments:
{ "type": "object", "properties": { "client_id": { "type": "string" }, "client_secret": { "type": "string" }, "token": { "type": "string" } }, "additionalProperties": false, "title": "onedrive_list_drives", "default": {} }

Lists all available drives and their types for given Microsoft OneDrive credentials.

{
  "id": "6841f242-840a-11e6-a437-00e04d680384",
  "msg": "method",
  "method": "cloudsync.onedrive_list_drives",
  "params": [{
    "client_id": "...",
    "client_secret": "",
    "token": "{...}",
  }]
}

Returns

[{"drive_type": "PERSONAL", "drive_id": "6bb903a25ad65e46"}]
cloudsync.providers

Returns a list of dictionaries of supported providers for Cloud Sync Tasks.

credentials_schema is JSON schema for credentials attributes.

task_schema is JSON schema for task attributes.

buckets is a boolean value which is set to true if the provider supports buckets.

Example of a single provider:

[ { "name": "AMAZON_CLOUD_DRIVE", "title": "Amazon Cloud Drive", "credentials_schema": [ { "property": "client_id", "schema": { "title": "Amazon Application Client ID", "required": true, "type": "string" } }, { "property": "client_secret", "schema": { "title": "Application Key", "required": true, "type": "string" } } ], "credentials_oauth": null, "buckets": false, "bucket_title": "Bucket", "task_schema": [] } ]

cloudsync.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Query all Cloud Sync Tasks with query-filters and query-options.

cloudsync.restore
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "description": { "type": "string" }, "transfer_mode": { "type": "string", "enum": [ "SYNC", "COPY" ] }, "path": { "type": "string" } }, "additionalProperties": false, "title": "cloud_sync_restore", "default": {} }

Create the opposite of cloud sync task id (PULL if it was PUSH and vice versa).

cloudsync.sync
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "dry_run": { "type": "boolean" } }, "additionalProperties": false, "title": "cloud_sync_sync_options", "default": {} }

Run the cloud_sync job id, syncing the local data to remote.
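
An illustrative request performing a dry run of task 1 (the id is a placeholder):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "cloudsync.sync",
    "params": [
        1,
        {"dry_run": true}
    ]
}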

cloudsync.sync_onetime
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "object", "properties": { "description": { "type": "string" }, "direction": { "type": "string", "enum": [ "PUSH", "PULL" ] }, "transfer_mode": { "type": "string", "enum": [ "SYNC", "COPY", "MOVE" ] }, "path": { "type": "string" }, "credentials": { "type": "integer" }, "encryption": { "type": "boolean" }, "filename_encryption": { "type": "boolean" }, "encryption_password": { "type": "string" }, "encryption_salt": { "type": "string" }, "schedule": { "type": "object", "properties": { "minute": { "type": "string" }, "hour": { "type": "string" }, "dom": { "type": "string" }, "month": { "type": "string" }, "dow": { "type": "string" } }, "additionalProperties": false }, "follow_symlinks": { "type": "boolean" }, "transfers": { "type": [ "integer", "null" ] }, "bwlimit": { "type": "array", "items": [ { "type": "object" } ] }, "exclude": { "type": "array", "items": [ { "type": "string" } ] }, "attributes": { "type": "object", "properties": {}, "additionalProperties": true }, "snapshot": { "type": "boolean" }, "pre_script": { "type": "string" }, "post_script": { "type": "string" }, "args": { "type": "string" }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "cloud_sync_create", "default": {} }
{ "type": "object", "properties": { "dry_run": { "type": "boolean" } }, "additionalProperties": false, "title": "cloud_sync_sync_options", "default": {} }

Run cloud sync task without creating it.

cloudsync.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "description": { "type": "string" }, "direction": { "type": "string", "enum": [ "PUSH", "PULL" ] }, "transfer_mode": { "type": "string", "enum": [ "SYNC", "COPY", "MOVE" ] }, "path": { "type": "string" }, "credentials": { "type": "integer" }, "encryption": { "type": "boolean" }, "filename_encryption": { "type": "boolean" }, "encryption_password": { "type": "string" }, "encryption_salt": { "type": "string" }, "schedule": { "type": "object", "properties": { "minute": { "type": "string" }, "hour": { "type": "string" }, "dom": { "type": "string" }, "month": { "type": "string" }, "dow": { "type": "string" } }, "additionalProperties": false }, "follow_symlinks": { "type": "boolean" }, "transfers": { "type": [ "integer", "null" ] }, "bwlimit": { "type": "array", "items": [ { "type": "object" } ] }, "exclude": { "type": "array", "items": [ { "type": "string" } ] }, "attributes": { "type": "object", "properties": {}, "additionalProperties": true }, "snapshot": { "type": "boolean" }, "pre_script": { "type": "string" }, "post_script": { "type": "string" }, "args": { "type": "string" }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "cloud_sync_create", "default": {} }

Updates the cloud_sync entry id with data.

cloudsync.credentials

cloudsync.credentials.create
Arguments:
{ "type": "object", "properties": { "name": { "type": "string" }, "provider": { "type": "string" }, "attributes": { "type": "object", "properties": {}, "additionalProperties": true } }, "additionalProperties": false, "title": "cloud_sync_credentials_create", "default": {} }

Create Cloud Sync Credentials.

attributes is a dictionary of valid values which will be used to authorize with the provider.
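
An illustrative request creating S3 credentials. The provider name and attribute names must match an entry returned by cloudsync.providers; the ones shown here are placeholders:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "cloudsync.credentials.create",
    "params": [{
        "name": "S3 credential",
        "provider": "S3",
        "attributes": {
            "access_key_id": "AQX13",
            "secret_access_key": "JKW90"
        }
    }]
}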

cloudsync.credentials.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete Cloud Sync Credentials of id.

cloudsync.credentials.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
cloudsync.credentials.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "name": { "type": "string" }, "provider": { "type": "string" }, "attributes": { "type": "object", "properties": {}, "additionalProperties": true } }, "additionalProperties": false, "title": "cloud_sync_credentials_create", "default": {} }

Update Cloud Sync Credentials of id.

cloudsync.credentials.verify
Arguments:
{ "type": "object", "properties": { "provider": { "type": "string" }, "attributes": { "type": "object", "properties": {}, "additionalProperties": true } }, "additionalProperties": false, "title": "cloud_sync_credentials_verify", "default": {} }

Verify if attributes provided for provider are authorized by the provider.

config

config.reset
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "object", "properties": { "reboot": { "type": "boolean" } }, "additionalProperties": false, "title": "options", "default": {} }

Reset database to configuration defaults.

If reboot is true, this job will reboot the system after it has completed, with a delay of 10 seconds.

config.save
Job This endpoint is a Job. Please refer to the Jobs section for details.
A file can be downloaded from this endpoint. Please refer to the Jobs section to download a file.
Arguments:
{ "type": "object", "properties": { "secretseed": { "type": "boolean" }, "pool_keys": { "type": "boolean" }, "root_authorized_keys": { "type": "boolean" } }, "additionalProperties": false, "title": "configsave", "default": {} }

Create a bundle of security-sensitive information. These options select which information is included in the bundle:

secretseed: include password secret seed.

pool_keys: include GELI encryption keys.

root_authorized_keys: include "authorized_keys" file for the root user.

If none of these options are set, the bundle is not generated and the database file is provided.
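
An illustrative request including the password secret seed in the bundle:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "config.save",
    "params": [{
        "secretseed": true
    }]
}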

config.upload
Job This endpoint is a Job. Please refer to the Jobs section for details.
A file can be uploaded to this endpoint. Please refer to the Jobs section to upload a file.

Accepts a configuration file via job pipe.

core

core.bulk
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "title": "method", "type": "string" }
{ "type": "array", "title": "params", "default": [], "items": [ { "type": "null" } ] }
{ "title": "description", "default": null, "type": [ "string", "null" ] }

Will loop on a list of items for the given method, returning a list of dicts containing a result and error key.

description contains format string for job progress (e.g. "Deleting snapshot {0[dataset]}@{0[name]}")

The result will be the message returned by the method being called, or a string describing an error, in which case the error key will contain the exception.
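
An illustrative request deleting two cron jobs in one call (the method name and ids are examples; each inner list holds the parameters for one call):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "core.bulk",
    "params": [
        "cronjob.delete",
        [[1], [2]]
    ]
}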

core.debug
Arguments:
{ "title": "engine", "type": "string", "enum": [ "PTVS", "PYDEV", "REMOTE_PDB" ] }
{ "type": "object", "properties": { "secret": { "type": "string" }, "bind_address": { "type": "string" }, "bind_port": { "type": "integer" }, "host": { "type": "string" }, "wait_attach": { "type": "boolean" }, "local_path": { "type": "string" }, "threaded": { "type": "boolean" } }, "additionalProperties": false, "title": "options", "default": {} }

Setup middlewared for remote debugging.

engines:

- PTVS: Python Visual Studio
- PYDEV: Python Dev (Eclipse/PyCharm)
- REMOTE_PDB: Remote vanilla PDB (over TCP sockets)

options:

- secret: password for PTVS
- host: required for PYDEV, hostname of local computer (developer workstation)
- local_path: required for PYDEV, path for middlewared source in local computer (e.g. /home/user/freenas/src/middlewared/middlewared)
- threaded: run debugger in a new thread instead of event loop

core.download
Arguments:
{ "title": "method", "type": "string" }
{ "type": "array", "title": "args", "default": [], "items": [ { "type": "null" } ] }
{ "title": "filename", "type": "string" }

Core helper to call a job marked for download.

Returns the job id and the URL for download.
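
An illustrative request generating a download URL for config.save (the arguments mirror that method's schema; the filename is a placeholder):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "core.download",
    "params": [
        "config.save",
        [{"secretseed": true}],
        "freenas-config.db"
    ]
}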

core.get_events

Returns metadata for every possible event emitted from websocket server.

core.get_jobs
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Get the long running jobs.

core.get_methods
Arguments:
{ "title": "service", "default": null, "type": [ "string", "null" ] }

Return methods metadata of every available service.

service parameter is optional and filters the result for a single service.

core.get_services

Returns a list of all registered services.

core.job_abort
Arguments:
{ "type": "integer", "title": "id" }
-
core.job_update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "progress": { "type": "object", "properties": {}, "additionalProperties": true } }, "additionalProperties": false, "title": "job-update", "default": {} }
-
core.job_wait
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "integer", "title": "id" }
-
core.ping

Utility method which just returns "pong". Can be used to keep connection/authtoken alive instead of using "ping" protocol message.
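
Example request, matching the call format used throughout this document:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "core.ping",
    "params": []
}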

core.ping_remote
Arguments:
{ "type": "object", "properties": { "type": { "type": "string", "enum": [ "ICMP", "ICMPV4", "ICMPV6" ] }, "hostname": { "type": "string" }, "timeout": { "type": "integer" } }, "additionalProperties": false, "title": "options", "default": {} }

Method that will send an ICMP echo request to "hostname" and will wait up to "timeout" for a reply.
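
An illustrative request; the hostname and timeout values are placeholders:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "core.ping_remote",
    "params": [{
        "type": "ICMP",
        "hostname": "truenas.domain",
        "timeout": 5
    }]
}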

core.resize_shell
Arguments:
{ "title": "id", "type": "string" }
{ "type": "integer", "title": "cols" }
{ "type": "integer", "title": "rows" }

Resize terminal session (/websocket/shell) to cols x rows

core.sessions
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Get currently open websocket sessions.

cronjob

cronjob.create
Arguments:
{ "type": "object", "properties": { "enabled": { "type": "boolean" }, "stderr": { "type": "boolean" }, "stdout": { "type": "boolean" }, "schedule": { "type": "object", "properties": { "minute": { "type": "string" }, "hour": { "type": "string" }, "dom": { "type": "string" }, "month": { "type": "string" }, "dow": { "type": "string" } }, "additionalProperties": false }, "command": { "type": "string" }, "description": { "type": "string" }, "user": { "type": "string" } }, "additionalProperties": false, "title": "cron_job_create", "default": {} }

Create a new cron job.

stderr and stdout are boolean values which, if true, suppress standard error and standard output respectively.

Create a cron job which executes touch /tmp/testfile after every 5 minutes.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "cronjob.create",
    "params": [{
        "enabled": true,
        "schedule": {
            "minute": "5",
            "hour": "*",
            "dom": "*",
            "month": "*",
            "dow": "*"
        },
        "command": "touch /tmp/testfile",
        "description": "Test command",
        "user": "root",
        "stderr": true,
        "stdout": true
    }]
}
cronjob.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete cronjob of id.

cronjob.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
cronjob.run
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "boolean", "title": "skip_disabled", "default": false }

Job to run cronjob task of id.
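
An illustrative request; the id value is a placeholder and skip_disabled is left at its default:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "cronjob.run",
    "params": [1, false]
}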

cronjob.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "enabled": { "type": "boolean" }, "stderr": { "type": "boolean" }, "stdout": { "type": "boolean" }, "schedule": { "type": "object", "properties": { "minute": { "type": "string" }, "hour": { "type": "string" }, "dom": { "type": "string" }, "month": { "type": "string" }, "dow": { "type": "string" } }, "additionalProperties": false }, "command": { "type": "string" }, "description": { "type": "string" }, "user": { "type": "string" } }, "additionalProperties": false, "title": "cron_job_create", "default": {} }

Update cronjob of id.

device

device.get_info
Arguments:
{ "title": "type", "type": "string", "enum": [ "SERIAL", "DISK" ] }

Get info for SERIAL/DISK device types.

directoryservices

directoryservices.cache_refresh
-
directoryservices.get_state

DISABLED Directory Service is disabled.

FAULTED Directory Service is enabled, but not HEALTHY. Review logs and generated alert messages to debug the issue causing the service to be in a FAULTED state.

LEAVING Directory Service is in process of stopping.

JOINING Directory Service is in process of starting.

HEALTHY Directory Service is enabled, and last status check has passed.

disk

disk.decrypt
Job This endpoint is a Job. Please refer to the Jobs section for details.
A file can be uploaded to this endpoint. Please refer to the Jobs section to upload a file.
Arguments:
{ "type": "array", "title": "devices", "items": [ { "type": "string" } ] }
{ "title": "passphrase", "default": null, "type": [ "string", "null" ] }

Decrypt devices using uploaded encryption key

disk.get_encrypted
Arguments:
{ "type": "object", "properties": { "unused": { "type": "boolean" } }, "additionalProperties": false, "title": "options", "default": {} }

Get all geli providers

It might be an entire disk or a partition of type freebsd-zfs.

Before a geli encrypted pool can be imported, disks used in the pool should be decrypted and then pool import can proceed as desired. In that case unused can be passed as true, to find out which disks are geli encrypted but not being used by active ZFS pools.

disk.get_unused
Arguments:
{ "type": "boolean", "title": "join_partitions", "default": false }

Helper method to get all disks that are not in use, either by the boot pool or the user pools.

disk.label_to_dev
-
disk.overprovision
Arguments:
{ "title": "devname", "type": "string" }
{ "type": "integer", "title": "size" }

Sets overprovision of disk devname to size gigabytes

disk.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Query disks.

The following extra options are supported:

 include_expired: true - will also include expired disks (default: false)
 passwords: true - will not hide KMIP password for the disks (default: false)
 pools: true - will join pool name for each disk (default: false)
disk.sed_dev_name
-
disk.smart_attributes
Arguments:
{ "title": "name", "type": "string" }

Returns S.M.A.R.T. attributes values for specified disk name.

disk.spindown
Arguments:
{ "title": "disk", "type": "string" }

Spin down disk by device name

Spin down ada0

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "disk.spindown",
    "params": ["ada0"]
}
disk.temperature
Arguments:
{ "title": "name", "type": "string" }
{ "title": "powermode", "default": "NEVER", "type": "string", "enum": [ "NEVER", "SLEEP", "STANDBY", "IDLE" ] }

Returns temperature for device name using specified S.M.A.R.T. powermode.
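
An illustrative request; the disk name is a placeholder:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "disk.temperature",
    "params": ["ada0", "NEVER"]
}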

disk.temperatures
Arguments:
{ "type": "array", "title": "names", "items": [ { "type": "string" } ] }
{ "title": "powermode", "default": "NEVER", "type": "string", "enum": [ "NEVER", "SLEEP", "STANDBY", "IDLE" ] }

Returns temperatures for a list of devices (runs in parallel). See disk.temperature documentation for more details.

disk.unoverprovision
Arguments:
{ "title": "devname", "type": "string" }

Removes overprovisioning of disk devname

disk.update
Arguments:
{ "title": "id", "type": "string" }
{ "type": "object", "properties": { "togglesmart": { "type": "boolean" }, "acousticlevel": { "type": "string", "enum": [ "DISABLED", "MINIMUM", "MEDIUM", "MAXIMUM" ] }, "advpowermgmt": { "type": "string", "enum": [ "DISABLED", "1", "64", "127", "128", "192", "254" ] }, "description": { "type": "string" }, "hddstandby": { "type": "string", "enum": [ "ALWAYS ON", "5", "10", "20", "30", "60", "120", "180", "240", "300", "330" ] }, "hddstandby_force": { "type": "boolean" }, "passwd": { "type": "string" }, "smartoptions": { "type": "string" }, "critical": { "type": [ "integer", "null" ] }, "difference": { "type": [ "integer", "null" ] }, "informational": { "type": [ "integer", "null" ] }, "enclosure": { "type": "object", "properties": { "number": { "type": "integer" }, "slot": { "type": "integer" } }, "additionalProperties": false } }, "additionalProperties": false, "title": "disk_update", "default": {} }

Update disk of id.

If extra options need to be passed to SMART which we don't already support, they can be passed by smartoptions.

critical, informational and difference are integer values on which alerts for SMART are configured if the disk temperature crosses the assigned threshold for each respective attribute. If they are set to null, then SMARTD config values are used as defaults.

Email of log level LOG_CRIT is issued when disk temperature crosses critical.

Email of log level LOG_INFO is issued when disk temperature crosses informational.

If temperature of a disk changes by difference degree Celsius since the last report, SMART reports this.

disk.wipe
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "title": "dev", "type": "string" }
{ "title": "mode", "type": "string", "enum": [ "QUICK", "FULL", "FULL_RANDOM" ] }
{ "type": "boolean", "title": "synccache", "default": true }
{ "type": "object", "properties": { "configure_swap": { "type": "boolean" } }, "additionalProperties": false, "title": "swap_removal_options", "default": {} }

Performs a wipe of a disk dev. It can be of the following modes:

- QUICK: clean the first few and last megabytes of every partition and disk
- FULL: write whole disk with zeros
- FULL_RANDOM: write whole disk with random bytes
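
A QUICK wipe request for illustration; the device name is a placeholder and the remaining arguments are left at their defaults:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "disk.wipe",
    "params": ["ada0", "QUICK"]
}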

dns

dns.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Query Name Servers with query-filters and query-options.

dyndns

dyndns.config
-
dyndns.provider_choices

List supported Dynamic DNS Service Providers.

dyndns.update
Arguments:
{ "type": "object", "properties": { "provider": { "type": "string" }, "checkip_ssl": { "type": "boolean" }, "checkip_server": { "type": "string" }, "checkip_path": { "type": "string" }, "ssl": { "type": "boolean" }, "custom_ddns_server": { "type": "string" }, "custom_ddns_path": { "type": "string" }, "domain": { "type": "array", "items": [ { "type": "string" } ] }, "username": { "type": "string" }, "password": { "type": "string" }, "period": { "type": "integer" } }, "additionalProperties": false, "title": "dyndns_update", "default": {} }

Update dynamic dns service configuration.

period indicates how often the IP is checked in seconds.

ssl if set to true, makes sure that HTTPS is used for the connection to the server which updates the DNS record.

ec2

ec2.Meta
-
ec2.instance_id
-
ec2.set_ntp_servers
-
ec2.set_root_ssh_public_key
-
ec2.setup
-

enclosure

enclosure.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
enclosure.set_slot_status
Arguments:
{ "title": "enclosure_id", "type": "string" }
{ "type": "integer", "title": "slot" }
{ "title": "status", "type": "string", "enum": [ "CLEAR", "FAULT", "IDENTIFY" ] }
-
enclosure.update
Arguments:
{ "title": "id", "type": "string" }
{ "type": "object", "properties": { "label": { "type": "string" } }, "additionalProperties": false, "title": "enclosure_update", "default": {} }
-

enterprise

failover

failover.call_remote
Arguments:
{ "title": "method", "type": "string" }
{ "type": "array", "title": "args", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "timeout": { "type": "integer" }, "job": { "type": "boolean" }, "job_return": { "type": [ "boolean", "null" ] }, "callback": { "anyOf": [ { "type": "string" }, { "type": "integer" }, { "type": "boolean" }, { "type": "object" }, { "type": "array" } ], "title": "callback", "nullable": false } }, "additionalProperties": false, "title": "options", "default": {} }

Call a method in the other node.

failover.config
-
failover.control
Arguments:
{ "title": "action", "type": "string", "enum": [ "ENABLE", "DISABLE" ] }
{ "type": "object", "properties": { "active": { "type": "boolean" } }, "additionalProperties": false, "title": "options", "default": {} }
-
failover.disabled_reasons

Returns a list of reasons why failover is not enabled/functional.

NO_VOLUME - There are no pools configured.
NO_VIP - There are no interfaces configured with Virtual IP.
NO_SYSTEM_READY - Other storage controller has not finished booting.
NO_PONG - Other storage controller is not communicable.
NO_FAILOVER - Failover is administratively disabled.
NO_LICENSE - Other storage controller has no license.
DISAGREE_CARP - Nodes CARP states do not agree.
MISMATCH_DISKS - The storage controllers do not have the same quantity of disks.
NO_CRITICAL_INTERFACES - No network interfaces are marked critical for failover.

failover.force_master

Force this controller to become MASTER.

failover.get_ips

Get a list of IPs which can be accessed for management via UI.

failover.hardware

Returns the hardware type for an HA system: ECHOSTREAM, ECHOWARP, PUMA, SBB, ULTIMATE, BHYVE, MANUAL.

failover.in_progress

Returns True if there is an ongoing failover event.

failover.licensed

Checks whether this instance is licensed as an HA unit.

failover.node

Returns the slot position in the chassis in which the controller is located.

A - First node
B - Second node
MANUAL - slot position in chassis could not be determined

failover.status

Get the current HA status.

Returns: MASTER, BACKUP, ELECTING, IMPORTING, ERROR, SINGLE

failover.sync_from_peer

Sync database and files from the other controller.

failover.sync_to_peer
Arguments:
{ "type": "object", "properties": { "reboot": { "type": "boolean" } }, "additionalProperties": false, "title": "options", "default": {} }

Sync database and files to the other controller.

reboot as true will reboot the other controller after syncing.
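
For illustration, a request asking the other controller to reboot after the sync:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "failover.sync_to_peer",
    "params": [{"reboot": true}]
}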

failover.unlock
Arguments:
{ "type": "object", "properties": { "pools": { "type": "array", "items": [ { "type": "object" } ] }, "datasets": { "type": "array", "items": [ { "type": "object" } ] } }, "additionalProperties": false, "title": "options", "default": {} }

Unlock pools in HA, syncing passphrase between controllers and forcing this controller to be MASTER importing the pools.

failover.update
Arguments:
{ "type": "object", "properties": { "disabled": { "type": "boolean" }, "timeout": { "type": "integer" }, "master": { "type": [ "boolean", "null" ] } }, "additionalProperties": false, "title": "failover_update", "default": {} }

Update failover state.

disabled when true indicates that HA will be disabled.

master marks the particular node in the chassis as the master node. The standby node will have the opposite value.

timeout is the time to WAIT until a failover occurs when a network event occurs on an interface that is marked critical for failover AND HA is enabled and working appropriately.

The default time to wait is 2 seconds.
**NOTE**
    This setting does NOT affect the `disabled` or `master` parameters.
failover.upgrade
Job This endpoint is a Job. Please refer to the Jobs section for details.
A file can be uploaded to this endpoint. Please refer to the Jobs section to upload a file.
Arguments:
{ "type": "object", "properties": { "train": { "type": "string" } }, "additionalProperties": false, "title": "failover_upgrade", "default": {} }

Upgrades both controllers.

Files will be downloaded to the Active Controller and then transferred to the Standby Controller.

Upgrade process will start concurrently on both nodes.

Once both upgrades are applied, the Standby Controller will reboot. This job will wait for the reboot to complete before finalizing.

failover.upgrade_finish
Job This endpoint is a Job. Please refer to the Jobs section for details.

Perform the last stage of an HA upgrade.

This will activate the new boot environment on the Standby Controller and reboot it.

failover.upgrade_pending

Verify if HA upgrade is pending.

upgrade_finish needs to be called to finish the HA upgrade process if this method returns true.

failover.enclosure

failover.fenced

failover.internal_interface

failover.status

failover.vip

fcport

fcport.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
fcport.update
Arguments:
{ "title": "id", "type": "string" }
{ "type": "object", "properties": { "mode": { "type": "string", "enum": [ "INITIATOR", "TARGET", "DISABLED" ] }, "target": { "type": [ "integer", "null" ] } }, "additionalProperties": false, "title": "fcport_update", "default": {} }
-

filesystem

filesystem.acl_is_trivial
Arguments:
{ "title": "path", "type": "string" }

Returns True if the ACL can be fully expressed as a file mode without losing any access rules, or if the path does not support NFSv4 ACLs (for example a path on a tmpfs filesystem).

filesystem.chown
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "object", "properties": { "path": { "type": "string" }, "uid": { "type": [ "integer", "null" ] }, "gid": { "type": [ "integer", "null" ] }, "options": { "type": "object", "properties": { "recursive": { "type": "boolean" }, "traverse": { "type": "boolean" } }, "additionalProperties": false } }, "additionalProperties": false, "title": "filesystem_ownership", "default": {} }

Change owner or group of file at path.

uid and gid specify new owner of the file. If either key is absent or None, then existing value on the file is not changed.

recursive performs action recursively, but does not traverse filesystem mount points.

If traverse and recursive are specified, then the chown operation will traverse filesystem mount points.
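
An illustrative request; the path and uid are placeholders, and gid is left as null so the group is unchanged:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "filesystem.chown",
    "params": [{
        "path": "/mnt/tank/testfile",
        "uid": 1000,
        "gid": null,
        "options": {"recursive": false}
    }]
}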

filesystem.default_acl_choices

Get list of default ACL types.

filesystem.get
Job This endpoint is a Job. Please refer to the Jobs section for details.
A file can be downloaded from this endpoint. Please refer to the Jobs section to download a file.
Arguments:
{ "title": "path", "type": "string" }

Job to get contents of path.

filesystem.get_default_acl
Arguments:
{ "title": "acl_type", "default": "OPEN", "type": "string", "enum": [ "OPEN", "RESTRICTED", "HOME", "DOMAIN_HOME" ] }
{ "title": "share_type", "default": "NONE", "type": "string", "enum": [ "NONE", "AFP", "SMB", "NFS" ] }

Returns a default ACL depending on the usage specified by acl_type. If an admin group is defined, then an entry granting it full control will be placed at the top of the ACL. Optionally, share_type may be passed to get a share-specific template ACL.
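
For illustration, a request for an SMB share template based on the RESTRICTED ACL type:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "filesystem.get_default_acl",
    "params": ["RESTRICTED", "SMB"]
}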

filesystem.getacl
Arguments:
{ "title": "path", "type": "string" }
{ "type": "boolean", "title": "simplified", "default": true }

Return ACL of a given path. This may return a POSIX1e ACL or a NFSv4 ACL. The acl type is indicated by the ACLType key.

Errata about ACLType NFSv4:

simplified returns a shortened form of the ACL permset and flags.

TRAVERSE sufficient rights to traverse a directory, but not read contents.

READ sufficient rights to traverse a directory, and read file contents.

MODIFY sufficient rights to traverse, read, write, and modify a file. Equivalent to modify_set.

FULL_CONTROL all permissions.

If the permissions do not fit within one of the pre-defined simplified permissions types, then the full ACL entry will be returned.

In all cases we replace USER_OBJ, GROUP_OBJ, and EVERYONE with owner@, group@, everyone@ for consistency with getfacl and setfacl. If one of the aforementioned special tags is used, 'id' must be set to None.

An inheriting empty everyone@ ACE is appended to non-trivial ACLs in order to enforce Windows expectations regarding permissions inheritance. This entry is removed from NT ACL returned to SMB clients when 'ixnas' samba VFS module is enabled. We also remove it here to avoid confusion.

filesystem.listdir
Arguments:
{ "title": "path", "type": "string" }
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Get the contents of a directory.

Each entry of the list consists of:

name(str): name of the file
path(str): absolute path of the entry
realpath(str): absolute real path of the entry (if SYMLINK)
type(str): DIRECTORY | FILESYSTEM | SYMLINK | OTHER
size(int): size of the entry
mode(int): file mode/permission
uid(int): user id of entry owner
gid(int): group id of entry owner
acl(bool): extended ACL is present on file

filesystem.put
Job This endpoint is a Job. Please refer to the Jobs section for details.
A file can be uploaded to this endpoint. Please refer to the Jobs section to upload a file.
Arguments:
{ "title": "path", "type": "string" }
{ "type": "object", "properties": { "append": { "type": "boolean" }, "mode": { "type": "integer" } }, "additionalProperties": false, "title": "options", "default": {} }

Job to put contents to path.

filesystem.setacl
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "object", "properties": { "path": { "type": "string" }, "uid": { "type": [ "integer", "null" ] }, "gid": { "type": [ "integer", "null" ] }, "dacl": { "type": "array", "items": [ { "type": "object" }, { "type": "object" } ] }, "nfs41_flags": { "type": "object", "properties": { "autoinherit": { "type": "boolean" }, "protected": { "type": "boolean" } }, "additionalProperties": false }, "acltype": { "type": "string", "enum": [ "NFS4", "POSIX1E", "RICH" ] }, "options": { "type": "object", "properties": { "stripacl": { "type": "boolean" }, "recursive": { "type": "boolean" }, "traverse": { "type": "boolean" }, "canonicalize": { "type": "boolean" } }, "additionalProperties": false } }, "additionalProperties": false, "title": "filesystem_acl", "default": {} }

Set ACL of a given path. Takes the following parameters: path full path to directory or file.

dacl "simplified" ACL here or a full ACL.

uid the desired UID of the file user. If set to None (the default), then user is not changed.

gid the desired GID of the file group. If set to None (the default), then group is not changed.

recursive apply the ACL recursively

traverse traverse filesystem boundaries (ZFS datasets)

strip convert ACL to trivial. ACL is trivial if it can be expressed as a file mode without losing any access rules.

canonicalize reorder ACL entries so that they are in canonical form as described in the Microsoft documentation MS-DTYP 2.4.5 (ACL)

In all cases we replace USER_OBJ, GROUP_OBJ, and EVERYONE with owner@, group@, everyone@ for consistency with getfacl and setfacl. If one of the aforementioned special tags is used, 'id' must be set to None.

An inheriting empty everyone@ ACE is appended to non-trivial ACLs in order to enforce Windows expectations regarding permissions inheritance. This entry is removed from NT ACL returned to SMB clients when 'ixnas' samba VFS module is enabled.

filesystem.setperm
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "object", "properties": { "path": { "type": "string" }, "mode": { "type": [ "string", "null" ] }, "uid": { "type": [ "integer", "null" ] }, "gid": { "type": [ "integer", "null" ] }, "options": { "type": "object", "properties": { "stripacl": { "type": "boolean" }, "recursive": { "type": "boolean" }, "traverse": { "type": "boolean" } }, "additionalProperties": false } }, "additionalProperties": false, "title": "filesystem_permission", "default": {} }

Remove extended ACL from specified path.

If mode is specified then the mode will be applied to the path and files and subdirectories depending on which options are selected. Mode should be formatted as string representation of octal permissions bits.

uid the desired UID of the file user. If set to None (the default), then user is not changed.

gid the desired GID of the file group. If set to None (the default), then group is not changed.

stripacl setperm will fail if an extended ACL is present on path, unless stripacl is set to True.

recursive remove ACLs recursively, but do not traverse dataset boundaries.

traverse remove ACLs from child datasets.

If no mode is set, and stripacl is True, then non-trivial ACLs will be converted to trivial ACLs. An ACL is trivial if it can be expressed as a file mode without losing any access rules.
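
An illustrative request that strips any extended ACL and applies mode 755 recursively; the path is a placeholder:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "filesystem.setperm",
    "params": [{
        "path": "/mnt/tank/testdir",
        "mode": "755",
        "options": {"stripacl": true, "recursive": true}
    }]
}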

filesystem.stat
Arguments:
{ "title": "path", "type": "string" }

Return the filesystem stat(2) for a given path.

filesystem.statfs
Arguments:
{ "title": "path", "type": "string" }

Return stats from the filesystem of a given path.

Raises: CallError(ENOENT) - Path not found

ftp

ftp.config
-
ftp.update
Arguments:
{ "type": "object", "properties": { "port": { "type": "integer" }, "clients": { "type": "integer" }, "ipconnections": { "type": "integer" }, "loginattempt": { "type": "integer" }, "timeout": { "type": "integer" }, "rootlogin": { "type": "boolean" }, "onlyanonymous": { "type": "boolean" }, "anonpath": { "type": [ "string", "null" ] }, "onlylocal": { "type": "boolean" }, "banner": { "type": "string" }, "filemask": { "type": "string" }, "dirmask": { "type": "string" }, "fxp": { "type": "boolean" }, "resume": { "type": "boolean" }, "defaultroot": { "type": "boolean" }, "ident": { "type": "boolean" }, "reversedns": { "type": "boolean" }, "masqaddress": { "type": "string" }, "passiveportsmin": { "type": "integer" }, "passiveportsmax": { "type": "integer" }, "localuserbw": { "type": "integer" }, "localuserdlbw": { "type": "integer" }, "anonuserbw": { "type": "integer" }, "anonuserdlbw": { "type": "integer" }, "tls": { "type": "boolean" }, "tls_policy": { "type": "string", "enum": [ "on", "off", "data", "!data", "auth", "ctrl", "ctrl+data", "ctrl+!data", "auth+data", "auth+!data" ] }, "tls_opt_allow_client_renegotiations": { "type": "boolean" }, "tls_opt_allow_dot_login": { "type": "boolean" }, "tls_opt_allow_per_user": { "type": "boolean" }, "tls_opt_common_name_required": { "type": "boolean" }, "tls_opt_enable_diags": { "type": "boolean" }, "tls_opt_export_cert_data": { "type": "boolean" }, "tls_opt_no_cert_request": { "type": "boolean" }, "tls_opt_no_empty_fragments": { "type": "boolean" }, "tls_opt_no_session_reuse_required": { "type": "boolean" }, "tls_opt_stdenvvars": { "type": "boolean" }, "tls_opt_dns_name_required": { "type": "boolean" }, "tls_opt_ip_address_required": { "type": "boolean" }, "ssltls_certificate": { "type": [ "integer", "null" ] }, "options": { "type": "string" } }, "additionalProperties": false, "title": "ftp_update", "default": {} }

Update ftp service configuration.

clients is an integer value which sets the maximum number of simultaneous clients allowed. It defaults to 32.

ipconnections is an integer value which shows the maximum number of connections per IP address. It defaults to 0 which means unlimited.

timeout is the maximum client idle time in seconds before client is disconnected.

rootlogin is a boolean value which when configured to true enables login as root. This is generally discouraged because of the security risks.

onlyanonymous allows anonymous FTP logins with access to the directory specified by anonpath.

banner is a message displayed to local login users after they successfully authenticate. It is not displayed to anonymous login users.

filemask sets the default permissions for newly created files which by default are 077.

dirmask sets the default permissions for newly created directories which by default are 077.

resume if set allows FTP clients to resume interrupted transfers.

fxp if set to true indicates that File eXchange Protocol is enabled. Generally it is discouraged as it makes the server vulnerable to FTP bounce attacks.

defaultroot when set ensures that for local users, home directory access is only granted if the user is a member of group wheel.

ident is a boolean value which when set to true indicates that IDENT authentication is required. If identd is not running on the client, this can result in timeouts.

masqaddress is the public IP address or hostname which is set if FTP clients cannot connect through a NAT device.

localuserbw is a positive integer value which indicates maximum upload bandwidth in KB/s for local user. Default of zero indicates unlimited upload bandwidth ( from the FTP server configuration ).

localuserdlbw is a positive integer value which indicates maximum download bandwidth in KB/s for local user. Default of zero indicates unlimited download bandwidth ( from the FTP server configuration ).

anonuserbw is a positive integer value which indicates maximum upload bandwidth in KB/s for anonymous user. Default of zero indicates unlimited upload bandwidth ( from the FTP server configuration ).

anonuserdlbw is a positive integer value which indicates maximum download bandwidth in KB/s for anonymous user. Default of zero indicates unlimited download bandwidth ( from the FTP server configuration ).

tls is a boolean value which when set indicates that encrypted connections are enabled. This requires a certificate to be configured first with the certificate service and the id of certificate is passed on in ssltls_certificate.

tls_policy defines whether the control channel, data channel, both channels, or neither channel of an FTP session must occur over SSL/TLS.

tls_opt_enable_diags is a boolean value when set, logs verbosely. This is helpful when troubleshooting a connection.

options is a string used to add proftpd(8) parameters not covered by ftp service.

group

group.create
Arguments:
{ "type": "object", "properties": { "gid": { "type": "integer" }, "name": { "type": "string" }, "smb": { "type": "boolean" }, "sudo": { "type": "boolean" }, "sudo_nopasswd": { "type": "boolean" }, "sudo_commands": { "type": "array", "items": [ { "type": "string" } ] }, "allow_duplicate_gid": { "type": "boolean" }, "users": { "type": "array", "items": [ { "type": "integer" } ] } }, "additionalProperties": false, "title": "group_create", "default": {} }

Create a new group.

If gid is not provided it is automatically filled with the next one available.

allow_duplicate_gid allows distinct group names to share the same gid.

users is a list of user ids (id attribute from user.query).

smb specifies whether the group should be mapped into an NT group.
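
An illustrative request; the group name is a placeholder and gid is omitted so the next available gid is used:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "group.create",
    "params": [{
        "name": "testgroup",
        "smb": false,
        "sudo": false
    }]
}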

group.delete
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "delete_users": { "type": "boolean" } }, "additionalProperties": false, "title": "options", "default": {} }

Delete group id.

The delete_users option deletes all users that have this group as their primary group.

group.get_group_obj
Arguments:
{ "type": "object", "properties": { "groupname": { "type": "string" }, "gid": { "type": "integer" } }, "additionalProperties": false, "title": "get_group_obj", "default": {} }

Returns a dictionary containing information from struct grp for the group specified by either the groupname or gid. Bypasses group cache.

group.get_next_gid

Get the next available/free gid.

group.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Query groups with query-filters and query-options. As a performance optimization, only local groups will be queried by default.

Groups from directory services such as NIS, LDAP, or Active Directory will be included in query results if the option {'extra': {'search_dscache': True}} is specified.
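
For illustration, a query with no filters that also includes directory service groups via the documented extra option:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "group.query",
    "params": [
        [],
        {"extra": {"search_dscache": true}}
    ]
}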

group.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "gid": { "type": "integer" }, "name": { "type": "string" }, "smb": { "type": "boolean" }, "sudo": { "type": "boolean" }, "sudo_nopasswd": { "type": "boolean" }, "sudo_commands": { "type": "array", "items": [ { "type": "string" } ] }, "allow_duplicate_gid": { "type": "boolean" }, "users": { "type": "array", "items": [ { "type": "integer" } ] } }, "additionalProperties": false, "title": "group_create", "default": {} }

Update attributes of an existing group.

idmap

idmap.backend_choices

Returns array of valid idmap backend choices per directory service.

idmap.backend_options

This returns full information about idmap backend options. Not all options are valid for every backend.

idmap.clear_idmap_cache
Job This endpoint is a Job. Please refer to the Jobs section for details.

Stop samba, remove the winbindd_cache.tdb file, start samba, flush samba's cache. This should be performed after finalizing idmap changes.

idmap.create
Arguments:
{ "type": "object", "properties": { "name": { "type": "string" }, "dns_domain_name": { "type": "string" }, "range_low": { "type": "integer" }, "range_high": { "type": "integer" }, "idmap_backend": { "type": "string", "enum": [ "AD", "AUTORID", "LDAP", "NSS", "RFC2307", "RID", "TDB" ] }, "certificate": { "type": [ "integer", "null" ] }, "options": { "type": "object", "properties": { "schema_mode": { "type": "string", "enum": [ "RFC2307", "SFU", "SFU20" ] }, "unix_primary_group": { "type": "boolean" }, "unix_nss_info": { "type": "boolean" }, "rangesize": { "type": "integer" }, "readonly": { "type": "boolean" }, "ignore_builtin": { "type": "boolean" }, "ldap_base_dn": { "type": "string" }, "ldap_user_dn": { "type": "string" }, "ldap_user_dn_password": { "type": "string" }, "ldap_url": { "type": "string" }, "ssl": { "type": "string", "enum": [ "OFF", "ON", "START_TLS" ] }, "linked_service": { "type": "string", "enum": [ "LOCAL_ACCOUNT", "LDAP", "NIS" ] }, "ldap_server": { "type": "string" }, "ldap_realm": { "type": "boolean" }, "bind_path_user": { "type": "string" }, "bind_path_group": { "type": "string" }, "user_cn": { "type": "boolean" }, "cn_realm": { "type": "string" }, "ldap_domain": { "type": "string" }, "sssd_compat": { "type": "boolean" } }, "additionalProperties": false } }, "additionalProperties": false, "title": "idmap_domain_create", "default": {} }

Create a new IDMAP domain. These domains must be unique. This table will be automatically populated after joining an Active Directory domain if "allow trusted domains" is set to True in the AD service configuration. There are three default system domains: DS_TYPE_ACTIVEDIRECTORY, DS_TYPE_LDAP, DS_TYPE_DEFAULT_DOMAIN. The system domains correspond with the idmap settings under Active Directory, LDAP, and SMB respectively.

name the pre-Windows 2000 domain name.

DNS_domain_name DNS name of the domain.

idmap_backend provides a plugin interface for Winbind to use varying backends to store SID/uid/gid mapping tables. The correct setting depends on the environment in which the NAS is deployed.

range_low and range_high specify the UID and GID range for which this backend is authoritative.

certificate_id references the certificate ID of the SSL certificate to use for certificate-based authentication to a remote LDAP server. This parameter is not supported for all idmap backends as some backends will generate SID to ID mappings algorithmically without causing network traffic.

options are additional parameters that are backend-dependent:

AD idmap backend options: unix_primary_group If True, the primary group membership is fetched from the LDAP attributes (gidNumber). If False, the primary group membership is calculated via the "primaryGroupID" LDAP attribute.

unix_nss_info if True winbind will retrieve the login shell and home directory from the LDAP attributes. If False or if the AD LDAP entry lacks the SFU attributes the smb4.conf parameters template shell and template homedir are used.

schema_mode Defines the schema that idmap_ad should use when querying Active Directory regarding user and group information. This can be either the RFC2307 schema support included in Windows 2003 R2 or the Service for Unix (SFU) schema. For SFU 3.0 or 3.5 please choose "SFU", for SFU 2.0 please choose "SFU20". The behavior of primary group membership is controlled by the unix_primary_group option.

AUTORID idmap backend options: readonly sets the module to read-only mode. No new ranges will be allocated and new mappings will not be created in the idmap pool.

ignore_builtin ignores mapping requests for the BUILTIN domain.

LDAP idmap backend options: ldap_base_dn defines the directory base suffix to use for SID/uid/gid mapping entries.

ldap_user_dn defines the user DN to be used for authentication.

ldap_url specifies the LDAP server to use for SID/uid/gid map entries.

ssl specifies whether to encrypt the LDAP transport for the idmap backend.

NSS idmap backend options: linked_service specifies the auxiliary directory service ID provider.

RFC2307 idmap backend options: domain specifies the domain for which the idmap backend is being created. Numeric id, short-form domain name, or long-form DNS domain name of the domain may be specified. Entry must be entered as it appears in idmap.domain.

range_low and range_high specify the UID and GID range for which this backend is authoritative.

ldap_server defines the type of LDAP server to use. This can either be an LDAP server provided by the Active Directory Domain (ad) or a stand-alone LDAP server.

bind_path_user specifies the search base where user objects can be found in the LDAP server.

bind_path_group specifies the search base where group objects can be found in the LDAP server.

user_cn query cn attribute instead of uid attribute for the user name in LDAP.

realm append @realm to cn for groups (and users if user_cn is set) in LDAP queries.

ldap_domain when using the LDAP server in the Active Directory server, this allows one to specify the domain where to access the Active Directory server. This allows using trust relationships while keeping all RFC 2307 records in one place. This parameter is optional, the default is to access the AD server in the current domain to query LDAP records.

ldap_url when using a stand-alone LDAP server, this parameter specifies the LDAP URL for accessing the LDAP server.

ldap_user_dn defines the user DN to be used for authentication.

ldap_user_dn_password is the password to be used for LDAP authentication.

realm defines the realm to use in the user and group names. This is only required when using cn_realm together with a stand-alone ldap server.

RID backend options: sssd_compat generate idmap low range based on same algorithm that SSSD uses by default.

idmap.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete a domain by id. Deletion of default system domains is not permitted.

idmap.options_choices
Arguments:
{ "title": "idmap_backend", "type": "string", "enum": [ "AD", "AUTORID", "LDAP", "NSS", "RFC2307", "RID", "TDB" ] }

Returns a list of supported keys for the specified idmap backend.

idmap.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
idmap.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "name": { "type": "string" }, "dns_domain_name": { "type": "string" }, "range_low": { "type": "integer" }, "range_high": { "type": "integer" }, "idmap_backend": { "type": "string", "enum": [ "AD", "AUTORID", "LDAP", "NSS", "RFC2307", "RID", "TDB" ] }, "certificate": { "type": [ "integer", "null" ] }, "options": { "type": "object", "properties": { "schema_mode": { "type": "string", "enum": [ "RFC2307", "SFU", "SFU20" ] }, "unix_primary_group": { "type": "boolean" }, "unix_nss_info": { "type": "boolean" }, "rangesize": { "type": "integer" }, "readonly": { "type": "boolean" }, "ignore_builtin": { "type": "boolean" }, "ldap_base_dn": { "type": "string" }, "ldap_user_dn": { "type": "string" }, "ldap_user_dn_password": { "type": "string" }, "ldap_url": { "type": "string" }, "ssl": { "type": "string", "enum": [ "OFF", "ON", "START_TLS" ] }, "linked_service": { "type": "string", "enum": [ "LOCAL_ACCOUNT", "LDAP", "NIS" ] }, "ldap_server": { "type": "string" }, "ldap_realm": { "type": "boolean" }, "bind_path_user": { "type": "string" }, "bind_path_group": { "type": "string" }, "user_cn": { "type": "boolean" }, "cn_realm": { "type": "string" }, "ldap_domain": { "type": "string" }, "sssd_compat": { "type": "boolean" } }, "additionalProperties": false } }, "additionalProperties": false, "title": "idmap_domain_create", "default": {} }

Update a domain by id.

initshutdownscript

initshutdownscript.create
Arguments:
{ "type": "object", "properties": { "type": { "type": "string", "enum": [ "COMMAND", "SCRIPT" ] }, "command": { "type": [ "string", "null" ] }, "script_text": { "type": [ "string", "null" ] }, "script": { "type": [ "string", "null" ] }, "when": { "type": "string", "enum": [ "PREINIT", "POSTINIT", "SHUTDOWN" ] }, "enabled": { "type": "boolean" }, "timeout": { "type": "integer" }, "comment": { "type": "string" } }, "additionalProperties": false, "title": "init_shutdown_script_create", "default": {} }

Create an initshutdown script task.

type indicates if a command or script should be executed at when.

There are three choices for when:

1) PREINIT - This is early in the boot process before all the services / rc scripts have started
2) POSTINIT - This is late in the boot process when most of the services / rc scripts have started
3) SHUTDOWN - This is on shutdown

timeout is an integer value which indicates the time in seconds the system should wait for the execution of the script/command. Note that the base OS configures a hard timeout limit; when a script/command is set to execute on SHUTDOWN, that hard limit is extended by the timeout specified here so the script/command can execute as desired without being interrupted by the base OS's limit.
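
An illustrative request creating a POSTINIT command task; the command and comment are placeholders:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "initshutdownscript.create",
    "params": [{
        "type": "COMMAND",
        "command": "touch /tmp/testfile",
        "when": "POSTINIT",
        "enabled": true,
        "timeout": 10,
        "comment": "Test command"
    }]
}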

initshutdownscript.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete init/shutdown task of id.

initshutdownscript.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
initshutdownscript.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "type": { "type": "string", "enum": [ "COMMAND", "SCRIPT" ] }, "command": { "type": [ "string", "null" ] }, "script_text": { "type": [ "string", "null" ] }, "script": { "type": [ "string", "null" ] }, "when": { "type": "string", "enum": [ "PREINIT", "POSTINIT", "SHUTDOWN" ] }, "enabled": { "type": "boolean" }, "timeout": { "type": "integer" }, "comment": { "type": "string" } }, "additionalProperties": false, "title": "init_shutdown_script_create", "default": {} }

Update initshutdown script task of id.

interface

interface.bridge_members_choices
Arguments:
{ "title": "id", "default": null, "type": [ "string", "null" ] }

Return available interface choices for bridge_members attribute.

id is the name of the bridge interface to update or null for a new bridge interface.

interface.checkin

After interface changes are committed with a checkin timeout, this method needs to be called within that timeout to prevent the changes from being reverted.

This is to ensure the user verifies that the changes went as planned and that connectivity still works.

interface.checkin_waiting

Returns whether or not we are waiting for the user to check in the applied network changes before they are rolled back. Value is a number of seconds or null.

interface.choices
Arguments:
{ "type": "object", "properties": { "bridge_members": { "type": "boolean" }, "lag_ports": { "type": "boolean" }, "vlan_parent": { "type": "boolean" }, "exclude": { "type": "array", "items": [ { "type": "null" } ] }, "exclude_types": { "type": "array", "items": [ { "type": "string" } ] }, "include": { "type": "array", "items": [ { "type": "null" } ] } }, "additionalProperties": false, "title": "options", "default": {} }

Choices of available network interfaces.

bridge_members will include BRIDGE members. lag_ports will include LINK_AGGREGATION ports. vlan_parent will include VLAN parent interfaces. exclude is a list of interface prefixes to remove. include is a list of interfaces that should not be removed.

interface.commit
Arguments:
{ "type": "object", "properties": { "rollback": { "type": "boolean" }, "checkin_timeout": { "type": "integer" } }, "additionalProperties": false, "title": "options", "default": {} }

Commit/apply pending interfaces changes.

rollback as true (default) will roll back changes in case they fail to apply. checkin_timeout is the time in seconds to wait for the checkin call from the user acknowledging that the interface changes happened as planned. If checkin does not happen within this period of time the changes will get reverted.
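
For illustration, a commit that keeps the default rollback behaviour and gives the user 60 seconds to check in:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "interface.commit",
    "params": [{"rollback": true, "checkin_timeout": 60}]
}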

interface.create
Arguments:
{ "type": "object", "properties": { "name": { "type": "string" }, "description": { "type": [ "string", "null" ] }, "type": { "type": "string", "enum": [ "BRIDGE", "LINK_AGGREGATION", "VLAN" ] }, "disable_offload_capabilities": { "type": "boolean" }, "ipv4_dhcp": { "type": "boolean" }, "ipv6_auto": { "type": "boolean" }, "aliases": { "type": "array", "items": [ { "type": "object" } ] }, "failover_critical": { "type": "boolean" }, "failover_group": { "type": [ "integer", "null" ] }, "failover_vhid": { "type": [ "integer", "null" ] }, "failover_aliases": { "type": "array", "items": [ { "type": "object" } ] }, "failover_virtual_aliases": { "type": "array", "items": [ { "type": "object" } ] }, "bridge_members": { "type": "array", "items": [ { "type": "null" } ] }, "lag_protocol": { "type": "string", "enum": [ "LACP", "FAILOVER", "LOADBALANCE", "ROUNDROBIN", "NONE" ] }, "lag_ports": { "type": "array", "items": [ { "type": "string" } ] }, "vlan_parent_interface": { "type": "string" }, "vlan_tag": { "type": "integer" }, "vlan_pcp": { "type": [ "integer", "null" ] }, "mtu": { "type": [ "integer", "null" ] }, "options": { "type": "string" } }, "additionalProperties": false, "title": "interface_create", "default": {} }

Create virtual interfaces (Link Aggregation, VLAN)

For BRIDGE type the following attribute is required: bridge_members.

For LINK_AGGREGATION type the following attributes are required: lag_ports, lag_protocol.

For VLAN type the following attributes are required: vlan_parent_interface, vlan_tag and vlan_pcp.
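
An illustrative VLAN creation request; the interface names, tag, and priority are placeholders:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "interface.create",
    "params": [{
        "name": "vlan10",
        "type": "VLAN",
        "vlan_parent_interface": "em0",
        "vlan_tag": 10,
        "vlan_pcp": 0
    }]
}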

interface.delete
Arguments:
{ "title": "id", "type": "string" }

Delete Interface of id.

interface.enable_capabilities_individually
-
interface.has_pending_changes

Returns whether there are pending interfaces changes to be applied or not.

interface.ip_in_use
Arguments:
{ "type": "object", "properties": { "ipv4": { "type": "boolean" }, "ipv6": { "type": "boolean" }, "ipv6_link_local": { "type": "boolean" }, "loopback": { "type": "boolean" }, "any": { "type": "boolean" }, "static": { "type": "boolean" } }, "additionalProperties": false, "title": "ips", "default": {} }

Get all IPv4 / IPv6 addresses from all valid interfaces, excluding tap and epair.

loopback will return loopback interface addresses.

any will return wildcard addresses (0.0.0.0 and ::).

static when enabled will ensure we only return configured static IPs.

Returns a list of dicts, e.g.:

[
    {
        "type": "INET6",
        "address": "fe80::5054:ff:fe16:4aac",
        "netmask": 64
    },
    {
        "type": "INET",
        "address": "192.168.122.148",
        "netmask": 24,
        "broadcast": "192.168.122.255"
    }
]

interface.lag_ports_choices
Arguments:
{ "title": "id", "default": null, "type": [ "string", "null" ] }

Return available interface choices for lag_ports attribute.

id is the name of the LAG interface to update or null for a new LAG interface.

interface.lag_setup
-
interface.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Query Interfaces with query-filters and query-options

interface.rollback

Rollback pending interfaces changes.

interface.update
Arguments:
{ "title": "id", "type": "string" }
{ "type": "object", "properties": { "name": { "type": "string" }, "description": { "type": [ "string", "null" ] }, "disable_offload_capabilities": { "type": "boolean" }, "ipv4_dhcp": { "type": "boolean" }, "ipv6_auto": { "type": "boolean" }, "aliases": { "type": "array", "items": [ { "type": "object" } ] }, "failover_critical": { "type": "boolean" }, "failover_group": { "type": [ "integer", "null" ] }, "failover_vhid": { "type": [ "integer", "null" ] }, "failover_aliases": { "type": "array", "items": [ { "type": "object" } ] }, "failover_virtual_aliases": { "type": "array", "items": [ { "type": "object" } ] }, "bridge_members": { "type": "array", "items": [ { "type": "null" } ] }, "lag_protocol": { "type": "string", "enum": [ "LACP", "FAILOVER", "LOADBALANCE", "ROUNDROBIN", "NONE" ] }, "lag_ports": { "type": "array", "items": [ { "type": "string" } ] }, "vlan_parent_interface": { "type": "string" }, "vlan_tag": { "type": "integer" }, "vlan_pcp": { "type": [ "integer", "null" ] }, "mtu": { "type": [ "integer", "null" ] }, "options": { "type": "string" } }, "additionalProperties": false, "title": "interface_create", "default": {} }

Update Interface of id.

interface.vlan_parent_interface_choices

Return available interface choices for vlan_parent_interface attribute.

interface.vlan_setup
-
interface.websocket_interface

Returns the interface this websocket is connected to.

interface.websocket_local_ip

Returns the IP this websocket is connected to.

ipmi

ipmi.channels

Return a list with the IPMI channels available.

ipmi.identify
Arguments:
{ "type": "object", "properties": { "seconds": { "type": "integer" }, "force": { "type": "boolean" } }, "additionalProperties": false, "title": "options", "default": {} }

Turn on IPMI chassis identify light.

To turn off specify 0 as seconds.

ipmi.is_loaded

Returns a boolean true value indicating if the ipmi device is loaded.

ipmi.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Query all IPMI Channels with query-filters and query-options.

ipmi.update
Arguments:
{ "type": "integer", "title": "channel" }
{ "type": "object", "properties": { "ipaddress": { "type": "string" }, "netmask": { "type": "string" }, "gateway": { "type": "string" }, "password": { "type": "string" }, "dhcp": { "type": "boolean" }, "vlan": { "type": [ "integer", "null" ] } }, "additionalProperties": false, "title": "ipmi", "default": {} }

Update IPMI Configuration of the specified channel.

ipaddress is a valid ip which will be used to connect to the IPMI interface.

netmask is the subnet mask associated with ipaddress.

dhcp is a boolean value; if it is unset, ipaddress, netmask and gateway must be set.
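
For example, a request assigning a static address to channel 1 (all values are illustrative) might look like:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "ipmi.update",
    "params": [
        1,
        {
            "dhcp": false,
            "ipaddress": "192.168.0.50",
            "netmask": "255.255.255.0",
            "gateway": "192.168.0.1"
        }
    ]
}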

iscsi.auth

iscsi.auth.create
Arguments:
{ "type": "object", "properties": { "tag": { "type": "integer" }, "user": { "type": "string" }, "secret": { "type": "string" }, "peeruser": { "type": "string" }, "peersecret": { "type": "string" } }, "additionalProperties": false, "title": "iscsi_auth_create", "default": {} }

Create an iSCSI Authorized Access.

tag should be unique among all configured iSCSI Authorized Accesses.

secret and peersecret should be between 12 and 16 characters in length, inclusive.

peeruser and peersecret are provided only when configuring mutual CHAP. peersecret should not be similar to secret.
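
For example, a request creating an Authorized Access with mutual CHAP (user names and secrets are illustrative placeholders) might look like:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "iscsi.auth.create",
    "params": [{
        "tag": 1,
        "user": "initiatoruser",
        "secret": "initiatorsecret1",
        "peeruser": "targetuser",
        "peersecret": "targetsecret9876"
    }]
}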

iscsi.auth.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete iSCSI Authorized Access of id.

iscsi.auth.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
iscsi.auth.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "tag": { "type": "integer" }, "user": { "type": "string" }, "secret": { "type": "string" }, "peeruser": { "type": "string" }, "peersecret": { "type": "string" } }, "additionalProperties": false, "title": "iscsi_auth_create", "default": {} }

Update iSCSI Authorized Access of id.

iscsi.extent

iscsi.extent.create
Arguments:
{ "type": "object", "properties": { "name": { "type": "string" }, "type": { "type": "string", "enum": [ "DISK", "FILE" ] }, "disk": { "type": [ "string", "null" ] }, "serial": { "type": [ "string", "null" ] }, "path": { "type": [ "string", "null" ] }, "filesize": { "type": "integer" }, "blocksize": { "type": "integer" }, "pblocksize": { "type": "boolean" }, "avail_threshold": { "type": [ "integer", "null" ] }, "comment": { "type": "string" }, "insecure_tpc": { "type": "boolean" }, "xen": { "type": "boolean" }, "rpm": { "type": "string", "enum": [ "UNKNOWN", "SSD", "5400", "7200", "10000", "15000" ] }, "ro": { "type": "boolean" }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "iscsi_extent_create", "default": {} }

Create an iSCSI Extent.

When type is set to FILE, the filesize attribute is used and represents a number of bytes. If filesize is not zero, it should be a multiple of blocksize. path is a required attribute when type is set to FILE, and it should be ensured that it does not come under a jail root.

With type being set to DISK, a valid ZVOL or DISK should be provided.

insecure_tpc when enabled allows an initiator to bypass normal access control and access any scannable target. This allows xcopy operations otherwise blocked by access control.

xen is a boolean value which is set to true if Xen is being used as the iSCSI initiator.

ro when set to true prevents the initiator from writing to this LUN.
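
For example, a request creating a file-based extent (the name, path and sizes are illustrative; filesize here is a multiple of blocksize) might look like:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "iscsi.extent.create",
    "params": [{
        "name": "extent1",
        "type": "FILE",
        "path": "/mnt/tank/iscsi/extent1",
        "filesize": 536870912,
        "blocksize": 512
    }]
}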

iscsi.extent.delete
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "boolean", "title": "remove", "default": false }
{ "type": "boolean", "title": "force", "default": false }

Delete iSCSI Extent of id.

If id iSCSI Extent's type was configured to FILE, remove can be set to remove the configured file.
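
For example, a request deleting extent 2 and removing its backing file (the id is illustrative):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "iscsi.extent.delete",
    "params": [
        2,
        true
    ]
}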

iscsi.extent.disk_choices
Arguments:
{ "type": "array", "title": "exclude", "default": [], "items": [ { "type": "null" } ] }

exclude removes the given paths from the used_zvols list, allowing the user to keep the same item on update.

iscsi.extent.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
iscsi.extent.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "name": { "type": "string" }, "type": { "type": "string", "enum": [ "DISK", "FILE" ] }, "disk": { "type": [ "string", "null" ] }, "serial": { "type": [ "string", "null" ] }, "path": { "type": [ "string", "null" ] }, "filesize": { "type": "integer" }, "blocksize": { "type": "integer" }, "pblocksize": { "type": "boolean" }, "avail_threshold": { "type": [ "integer", "null" ] }, "comment": { "type": "string" }, "insecure_tpc": { "type": "boolean" }, "xen": { "type": "boolean" }, "rpm": { "type": "string", "enum": [ "UNKNOWN", "SSD", "5400", "7200", "10000", "15000" ] }, "ro": { "type": "boolean" }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "iscsi_extent_create", "default": {} }

Update iSCSI Extent of id.

iscsi.global

iscsi.global.alua_enabled

Returns whether iSCSI ALUA is enabled or not.

iscsi.global.config
-
iscsi.global.sessions
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Get a list of currently running iSCSI sessions. This includes initiator and target names and the unique connection IDs.

iscsi.global.update
Arguments:
{ "type": "object", "properties": { "basename": { "type": "string" }, "isns_servers": { "type": "array", "items": [ { "type": "string" } ] }, "pool_avail_threshold": { "type": [ "integer", "null" ] }, "alua": { "type": "boolean" } }, "additionalProperties": false, "title": "iscsiglobal_update", "default": {} }

alua is a no-op for FreeNAS.

iscsi.initiator

iscsi.initiator.create
Arguments:
{ "type": "object", "properties": { "initiators": { "type": "array", "items": [ { "type": "null" } ] }, "auth_network": { "type": "array", "items": [ { "type": "string" } ] }, "comment": { "type": "string" } }, "additionalProperties": false, "title": "iscsi_initiator_create", "default": {} }

Create an iSCSI Initiator.

initiators is a list of initiator hostnames which are authorized to access an iSCSI Target. To allow all possible initiators, initiators can be left empty.

auth_network is a list of IP/CIDR addresses which are allowed to use this initiator. If all networks are to be allowed, this field should be left empty.
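
For example, a request allowing any initiator from a single network (the network and comment are illustrative) might look like:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "iscsi.initiator.create",
    "params": [{
        "initiators": [],
        "auth_network": ["192.168.0.0/24"],
        "comment": "LAN initiators"
    }]
}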

iscsi.initiator.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete iSCSI initiator of id.

iscsi.initiator.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
iscsi.initiator.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "initiators": { "type": "array", "items": [ { "type": "null" } ] }, "auth_network": { "type": "array", "items": [ { "type": "string" } ] }, "comment": { "type": "string" } }, "additionalProperties": false, "title": "iscsi_initiator_create", "default": {} }

Update iSCSI initiator of id.

iscsi.portal

iscsi.portal.create
Arguments:
{ "type": "object", "properties": { "comment": { "type": "string" }, "discovery_authmethod": { "type": "string", "enum": [ "NONE", "CHAP", "CHAP_MUTUAL" ] }, "discovery_authgroup": { "type": [ "integer", "null" ] }, "listen": { "type": "array", "items": [ { "type": "object" } ] } }, "additionalProperties": false, "title": "iscsiportal_create", "default": {} }

Create a new iSCSI Portal.

discovery_authgroup is required for CHAP and CHAP_MUTUAL.
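
A minimal sketch of a portal create request, assuming each listen entry only needs the ip key referenced under iscsi.portal.listen_ip_choices (all values are illustrative):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "iscsi.portal.create",
    "params": [{
        "comment": "Default portal",
        "discovery_authmethod": "NONE",
        "listen": [{"ip": "192.168.0.10"}]
    }]
}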

iscsi.portal.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete iSCSI Portal id.

iscsi.portal.listen_ip_choices

Returns possible choices for listen.ip attribute of portal create and update.

iscsi.portal.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
iscsi.portal.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "comment": { "type": "string" }, "discovery_authmethod": { "type": "string", "enum": [ "NONE", "CHAP", "CHAP_MUTUAL" ] }, "discovery_authgroup": { "type": [ "integer", "null" ] }, "listen": { "type": "array", "items": [ { "type": "object" } ] } }, "additionalProperties": false, "title": "iscsiportal_create", "default": {} }

Update iSCSI Portal id.

iscsi.target

iscsi.target.create
Arguments:
{ "type": "object", "properties": { "name": { "type": "string" }, "alias": { "type": [ "string", "null" ] }, "mode": { "type": "string", "enum": [ "ISCSI", "FC", "BOTH" ] }, "groups": { "type": "array", "items": [ { "type": "object" } ] } }, "additionalProperties": false, "title": "iscsi_target_create", "default": {} }

Create an iSCSI Target.

groups is a list of group dictionaries which provide information related to using a portal, initiator, authmethod and auth with this target. auth represents a valid iSCSI Authorized Access and defaults to null.
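
A sketch of a target create request; the group entry keys (portal, initiator, authmethod, auth) mirror the fields described above, and all values are illustrative:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "iscsi.target.create",
    "params": [{
        "name": "target1",
        "alias": null,
        "mode": "ISCSI",
        "groups": [{
            "portal": 1,
            "initiator": 1,
            "authmethod": "NONE",
            "auth": null
        }]
    }]
}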

iscsi.target.delete
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "boolean", "title": "force", "default": false }

Delete iSCSI Target of id.

Deleting an iSCSI Target also deletes all Associated Targets which use the id iSCSI Target.

iscsi.target.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
iscsi.target.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "name": { "type": "string" }, "alias": { "type": [ "string", "null" ] }, "mode": { "type": "string", "enum": [ "ISCSI", "FC", "BOTH" ] }, "groups": { "type": "array", "items": [ { "type": "object" } ] } }, "additionalProperties": false, "title": "iscsi_target_create", "default": {} }

Update iSCSI Target of id.

iscsi.targetextent

iscsi.targetextent.create
Arguments:
{ "type": "object", "properties": { "target": { "type": "integer" }, "lunid": { "type": [ "integer", "null" ] }, "extent": { "type": "integer" } }, "additionalProperties": false, "title": "iscsi_targetextent_create", "default": {} }

Create an Associated Target.

If lunid is not provided, it will be automatically assigned based on the target.
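
For example, a request associating extent 1 with target 1 and letting lunid be assigned automatically (the ids are illustrative):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "iscsi.targetextent.create",
    "params": [{
        "target": 1,
        "extent": 1
    }]
}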

iscsi.targetextent.delete
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "boolean", "title": "force", "default": false }

Delete Associated Target of id.

iscsi.targetextent.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
iscsi.targetextent.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "target": { "type": "integer" }, "lunid": { "type": "integer" }, "extent": { "type": "integer" } }, "additionalProperties": false, "title": "iscsi_targetextent_create", "default": {} }

Update Associated Target of id.

jail

jail.activate
Arguments:
{ "title": "pool", "type": "string" }

Activates a pool for iocage usage, and deactivates the rest.

jail.clean
Arguments:
{ "title": "ds_type", "type": "string", "enum": [ "ALL", "JAIL", "TEMPLATE", "RELEASE" ] }

Cleans all iocage datasets of ds_type

jail.clone
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "title": "source_jail", "type": "string" }
{ "type": "object", "properties": { "uuid": { "type": "string" }, "pkglist": { "type": "array", "items": [ { "type": "string" } ] }, "thickjail": { "type": "boolean" }, "props": { "type": "array", "items": [ { "type": "null" } ] } }, "additionalProperties": false, "title": "clone_jail", "default": {} }
-
jail.create
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "object", "properties": { "release": { "type": "string" }, "template": { "type": "string" }, "pkglist": { "type": "array", "items": [ { "type": "string" } ] }, "uuid": { "type": "string" }, "basejail": { "type": "boolean" }, "empty": { "type": "boolean" }, "short": { "type": "boolean" }, "props": { "type": "array", "items": [ { "type": "null" } ] }, "https": { "type": "boolean" } }, "additionalProperties": false, "title": "options", "default": {} }

Creates a jail.

jail.default_configuration

Retrieve default configuration for iocage jails.

jail.delete
Arguments:
{ "title": "jail", "type": "string" }
{ "type": "object", "properties": { "force": { "type": "boolean" } }, "additionalProperties": false, "title": "options", "default": {} }

Takes a jail and destroys it.

jail.exec
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "title": "jail", "type": "string" }
{ "type": "array", "title": "command", "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "host_user": { "type": "string" }, "jail_user": { "type": "string" } }, "additionalProperties": false, "title": "options", "default": {} }

Issues a command inside a jail.
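
For example, a request running a command inside a jail named "myjail" (the jail name and command are illustrative):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "jail.exec",
    "params": [
        "myjail",
        ["uname", "-a"]
    ]
}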

jail.export
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "object", "properties": { "jail": { "type": "string" }, "compression_algorithm": { "type": "string", "enum": [ "ZIP", "LZMA" ] } }, "additionalProperties": false, "title": "options", "default": {} }

Export jail to compressed file.

jail.fetch
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "object", "properties": { "release": { "type": "string" }, "server": { "type": "string" }, "user": { "type": "string" }, "password": { "type": "string" }, "name": { "type": [ "string", "null" ] }, "jail_name": { "type": "string" }, "accept": { "type": "boolean" }, "https": { "type": "boolean" }, "props": { "type": "array", "items": [ { "type": "null" } ] }, "files": { "type": "array", "items": [ { "type": "null" } ] }, "branch": { "type": [ "string", "null" ] } }, "additionalProperties": false, "title": "options", "default": {} }

Fetches a release or plugin.

jail.fstab
Arguments:
{ "title": "jail", "type": "string" }
{ "type": "object", "properties": { "action": { "type": "string", "enum": [ "ADD", "REMOVE", "REPLACE", "LIST" ] }, "source": { "type": "string" }, "destination": { "type": "string" }, "fstype": { "type": "string" }, "fsoptions": { "type": "string" }, "dump": { "type": "string" }, "pass": { "type": "string" }, "index": { "type": "integer" } }, "additionalProperties": false, "title": "options", "default": {} }

Manipulate a jail's fstab.
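
For example, a request adding a read-only nullfs mount to a jail's fstab (the jail name, paths and options are illustrative):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "jail.fstab",
    "params": [
        "myjail",
        {
            "action": "ADD",
            "source": "/mnt/tank/data",
            "destination": "/mnt/data",
            "fstype": "nullfs",
            "fsoptions": "ro",
            "dump": "0",
            "pass": "0"
        }
    ]
}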

jail.get_activated_pool

Returns the activated pool if there is one, or None

jail.import_image
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "object", "properties": { "jail": { "type": "string" }, "path": { "type": [ "string", "null" ] }, "compression_algorithm": { "type": [ "string", "null" ], "enum": [ "ZIP", "LZMA", null ] } }, "additionalProperties": false, "title": "options", "default": {} }

Import jail from compressed file.

compression_algorithm: a null value indicates that middlewared should automatically determine which compression algorithm to use based on the compressed file extension. If multiple copies are found, an exception is raised.

path is the directory where the exported jail lives. It defaults to the iocage images dataset.

jail.interface_choices

Returns a dictionary of interface choices which can be used with creating/updating jails.

jail.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Query all jails with query-filters and query-options.

jail.rc_action
Arguments:
{ "title": "action", "type": "string", "enum": [ "START", "STOP", "RESTART" ] }

Performs the specified action on rc-enabled (boot=on) jails.

jail.releases_choices
Arguments:
{ "type": "boolean", "title": "remote", "default": false }

List installed or available releases which can be downloaded.

jail.restart
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "title": "jail", "type": "string" }

Takes a jail and restarts it.

jail.start
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "title": "jail", "type": "string" }

Takes a jail and starts it.

jail.stop
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "title": "jail", "type": "string" }
{ "type": "boolean", "title": "force", "default": false }

Takes a jail and stops it.

jail.update
Arguments:
{ "title": "jail", "type": "string" }
{ "type": "object", "properties": { "plugin": { "type": "boolean" } }, "additionalProperties": true, "title": "jail_update", "default": {} }

Sets a jail property.

jail.update_defaults
Arguments:
{ "type": "object", "properties": {}, "additionalProperties": true, "title": "props", "default": {} }

Update default properties for iocage which will apply to all jails going forward, e.g. nat_backend.

jail.update_to_latest_patch
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "title": "jail", "type": "string" }
{ "type": "boolean", "title": "update_pkgs", "default": false }

Updates specified jail to latest patch level.

jail.vnet_default_interface_choices

Returns a dictionary of interface choices which can be used with vnet_default_interface property.

kerberos

kerberos.config
-
kerberos.update
Arguments:
{ "type": "object", "properties": { "appdefaults_aux": { "type": "string" }, "libdefaults_aux": { "type": "string" } }, "additionalProperties": false, "title": "kerberos_settings_update", "default": {} }

appdefaults_aux adds parameters to the "appdefaults" section of the krb5.conf file.

libdefaults_aux adds parameters to the "libdefaults" section of the krb5.conf file.

kerberos.keytab

kerberos.keytab.create
Arguments:
{ "type": "object", "properties": { "file": { "type": "string" }, "name": { "type": "string" } }, "additionalProperties": false, "title": "kerberos_keytab_create", "default": {} }

Create a kerberos keytab. Uploaded keytab files will be merged with the system keytab under /etc/krb5.keytab.

file is the base64-encoded kerberos keytab.

name is the name for the kerberos keytab.

kerberos.keytab.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete kerberos keytab by id, and force regeneration of system keytab.

kerberos.keytab.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
kerberos.keytab.system_keytab_list

Returns content of system keytab (/etc/krb5.keytab).

kerberos.keytab.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "file": { "type": "string" }, "name": { "type": "string" } }, "additionalProperties": false, "title": "kerberos_keytab_update", "default": {} }

Update kerberos keytab by id.

kerberos.keytab.upload_keytab
Job This endpoint is a Job. Please refer to the Jobs section for details.
A file can be uploaded to this endpoint. Please refer to the Jobs section to upload a file.
Arguments:
{ "type": "object", "properties": { "name": { "type": "string" } }, "additionalProperties": false, "title": "keytab_data", "default": {} }

Upload a keytab file. This method expects the keytab file to be uploaded using the /_upload/ endpoint.

kerberos.realm

kerberos.realm.create
Arguments:
{ "type": "object", "properties": { "realm": { "type": "string" }, "kdc": { "type": "array", "items": [ { "type": "null" } ] }, "admin_server": { "type": "array", "items": [ { "type": "null" } ] }, "kpasswd_server": { "type": "array", "items": [ { "type": "null" } ] } }, "additionalProperties": false, "title": "kerberos_realm_create", "default": {} }

Create a new kerberos realm. This will be automatically populated during the domain join process in an Active Directory environment. Kerberos realm names are case-sensitive, but convention is to only use upper-case.

Entries for kdc, admin_server, and kpasswd_server are not required. If they are unpopulated, then kerberos will use DNS srv records to discover the correct servers. The option to hard-code them is provided due to AD site discovery. Kerberos has no concept of Active Directory sites. This means that middleware performs the site discovery and sets the kerberos configuration based on the AD site.
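
For example, a minimal request that creates a realm and leaves server discovery to DNS SRV records (the realm name is illustrative):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "kerberos.realm.create",
    "params": [{
        "realm": "EXAMPLE.COM"
    }]
}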

kerberos.realm.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete a kerberos realm by ID.

kerberos.realm.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
kerberos.realm.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "realm": { "type": "string" }, "kdc": { "type": "array", "items": [ { "type": "null" } ] }, "admin_server": { "type": "array", "items": [ { "type": "null" } ] }, "kpasswd_server": { "type": "array", "items": [ { "type": "null" } ] } }, "additionalProperties": false, "title": "kerberos_realm_create", "default": {} }

Update a kerberos realm by id. This will be automatically populated during the domain join process in an Active Directory environment. Kerberos realm names are case-sensitive, but convention is to only use upper-case.

keychaincredential

keychaincredential.create
Arguments:
{ "type": "object", "properties": { "name": { "type": "string" }, "type": { "type": "string" }, "attributes": { "type": "object", "properties": {}, "additionalProperties": true } }, "additionalProperties": false, "title": "keychain_credential_create", "default": {} }

Create a Keychain Credential

Create a Keychain Credential of any type. Every Keychain Credential has a name which is used to distinguish it from others. The following types are supported:

  • SSH_KEY_PAIR Which attributes are:
  • private_key
  • public_key (which can be omitted and thus automatically derived from private key)

At least one attribute is required.

  • SSH_CREDENTIALS Which attributes are:
  • host
  • port (default 22)
  • username (default root)
  • private_key (Keychain Credential ID)
  • remote_host_key (you can use keychaincredential.remote_ssh_host_key_scan to discover it)
  • cipher: one of STANDARD, FAST, or DISABLED (last requires special support from both SSH server and client)
  • connect_timeout (default 10)
{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "keychaincredential.create",
    "params": [{
        "name": "Work SSH connection",
        "type": "SSH_CREDENTIALS",
        "attributes": {
            "host": "work.freenas.org",
            "private_key": 12,
            "remote_host_key": "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMn1VjdSMatGnxbOsrneKyai+dh6d4Hm"
        }
    }]
}
keychaincredential.delete
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "cascade": { "type": "boolean" } }, "additionalProperties": false, "title": "options", "default": {} }

Delete Keychain Credential with specific id

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "keychaincredential.delete",
    "params": [
        13
    ]
}
keychaincredential.generate_ssh_key_pair

Generate a public/private key pair

Generate a public/private key pair (useful for SSH_KEY_PAIR type)

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "keychaincredential.generate_ssh_key_pair",
    "params": []
}
keychaincredential.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
keychaincredential.remote_ssh_host_key_scan
Arguments:
{ "type": "object", "properties": { "host": { "type": "string" }, "port": { "type": "string" }, "connect_timeout": { "type": "integer" } }, "additionalProperties": false, "title": "keychain_remote_ssh_host_key_scan", "default": {} }

Discover a remote host key

Discover a remote host key (useful for SSH_CREDENTIALS)

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "keychaincredential.delete",
    "params": [{
        "host": "work.freenas.org"
    }]
}
keychaincredential.remote_ssh_semiautomatic_setup
Arguments:
{ "type": "object", "properties": { "name": { "type": "string" }, "url": { "type": "string" }, "token": { "type": "string" }, "password": { "type": "string" }, "username": { "type": "string" }, "private_key": { "type": "integer" }, "cipher": { "type": "string", "enum": [ "STANDARD", "FAST", "DISABLED" ] }, "connect_timeout": { "type": "integer" } }, "additionalProperties": false, "title": "keychain_remote_ssh_semiautomatic_setup", "default": {} }

Perform semi-automatic SSH connection setup with another FreeNAS machine

Perform semi-automatic SSH connection setup with another FreeNAS machine. It creates an SSH_CREDENTIALS credential with the specified name that can be used to connect to the FreeNAS machine with the specified url and temporary auth token. The other FreeNAS machine adds private_key to the allowed username's private keys. Other SSH_CREDENTIALS attributes such as cipher and connect_timeout can be specified as well.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "keychaincredential.keychain_remote_ssh_semiautomatic_setup",
    "params": [{
        "name": "Work SSH connection",
        "url": "https://work.freenas.org",
        "token": "8c8d5fd1-f749-4429-b379-9c186db4f834",
        "private_key": 12
    }]
}
keychaincredential.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "name": { "type": "string" }, "attributes": { "type": "object", "properties": {}, "additionalProperties": true } }, "additionalProperties": false, "title": "keychain_credential_create", "default": {} }

Update a Keychain Credential with specific id

Please note that you can't change type.

Also, you must specify the full attributes value.

See the documentation for the create method for information on payload contents.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "keychaincredential.update",
    "params": [
        13,
        {
            "name": "Work SSH connection",
            "attributes": {
                "host": "work.ixsystems.com",
                "private_key": 12,
                "remote_host_key": "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMn1VjdSMatGnxbOsrneKyai+dh6d4Hm"
            }
        }
    ]
}
keychaincredential.used_by
Arguments:
{ "type": "integer", "title": "id" }

Returns list of objects that use this credential.

kmip

kmip.clear_sync_pending_keys

Clear all keys which are pending to be synced between KMIP server and TN database.

For ZFS/SED keys, we remove the UID from local database with which we are able to retrieve ZFS/SED keys. It should be used with caution.

kmip.config
-
kmip.kmip_sync_pending

Returns true or false based on if there are keys which are to be synced from local database to remote KMIP server or vice versa.

kmip.sync_keys

Sync ZFS/SED keys between KMIP Server and TN database.

kmip.update
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "object", "properties": { "enabled": { "type": "boolean" }, "force_clear": { "type": "boolean" }, "manage_sed_disks": { "type": "boolean" }, "manage_zfs_keys": { "type": "boolean" }, "change_server": { "type": "boolean" }, "validate": { "type": "boolean" }, "certificate": { "type": [ "integer", "null" ] }, "certificate_authority": { "type": [ "integer", "null" ] }, "port": { "type": "integer" }, "server": { "type": "string" } }, "additionalProperties": false, "title": "kmip_update", "default": {} }

Update KMIP Server Configuration.

System currently authenticates connection with remote KMIP Server with a TLS handshake. certificate and certificate_authority determine the certs which will be used to initiate the TLS handshake with server.

validate is enabled by default. When enabled, system will test connection to server making sure it's reachable.

manage_zfs_keys/manage_sed_disks when enabled will sync keys from local database to remote KMIP server. When disabled, if there are any keys left to be retrieved from the KMIP server, it will sync them back to local database.

enabled cannot be set to false if there are existing keys pending to be synced. However, users can still perform this action by enabling force_clear.

change_server is a boolean field which allows users to migrate data between two KMIP servers. System will first migrate keys from old KMIP server to local database and then migrate the keys from local database to new KMIP server. If it is unable to retrieve all the keys from old server, this will fail. Users can bypass this by enabling force_clear.

force_clear is a boolean option which, when enabled, removes all keys pending to be synced from the database. It should be used with extreme caution, as users may end up without ZFS dataset or SED disk keys, leaving them locked forever. It is disabled by default.
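
For example, a request enabling KMIP synchronization of ZFS keys (the server, port and certificate ids are illustrative):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "kmip.update",
    "params": [{
        "enabled": true,
        "server": "kmip.example.com",
        "port": 5696,
        "certificate": 1,
        "certificate_authority": 1,
        "manage_zfs_keys": true
    }]
}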

ldap

ldap.config
-
ldap.get_state

Wrapper function for 'directoryservices.get_state'. Returns only the state of the LDAP service.

ldap.schema_choices

Returns list of available LDAP schema choices.

ldap.ssl_choices

Returns list of SSL choices.

ldap.update
Arguments:
{ "type": "object", "properties": { "hostname": { "type": "array", "items": [ { "type": "null" } ] }, "basedn": { "type": "string" }, "binddn": { "type": "string" }, "bindpw": { "type": "string" }, "anonbind": { "type": "boolean" }, "ssl": { "type": "string", "enum": [ "OFF", "ON", "START_TLS" ] }, "certificate": { "type": [ "integer", "null" ] }, "validate_certificates": { "type": "boolean" }, "disable_freenas_cache": { "type": "boolean" }, "timeout": { "type": "integer" }, "dns_timeout": { "type": "integer" }, "kerberos_realm": { "type": [ "integer", "null" ] }, "kerberos_principal": { "type": "string" }, "has_samba_schema": { "type": "boolean" }, "auxiliary_parameters": { "type": "string" }, "schema": { "type": "string", "enum": [ "RFC2307", "RFC2307BIS" ] }, "enable": { "type": "boolean" } }, "additionalProperties": false, "title": "ldap_update", "default": {} }

hostname list of ip addresses or hostnames of LDAP servers with which to communicate in order of preference. Failover only occurs if the current LDAP server is unresponsive.

basedn specifies the default base DN to use when performing ldap operations. The base must be specified as a Distinguished Name in LDAP format.

binddn specifies the default bind DN to use when performing ldap operations. The bind DN must be specified as a Distinguished Name in LDAP format.

anonbind use anonymous authentication.

ssl establish SSL/TLS-protected connections to the LDAP server(s). GSSAPI signing is disabled on SSL/TLS-protected connections if kerberos authentication is used.

certificate is the LDAPS client certificate to be used for certificate-based authentication.

validate_certificates specifies whether to perform checks on server certificates in a TLS session. If enabled, TLS_REQCERT demand is set. The server certificate is requested. If no certificate is provided or if a bad certificate is provided, the session is immediately terminated. If disabled, TLS_REQCERT allow is set. The server certificate is requested, but all errors are ignored.

kerberos_realm in which the server is located. This parameter is only required for SASL GSSAPI authentication to the remote LDAP server.

kerberos_principal kerberos principal to use for SASL GSSAPI authentication to the remote server. If kerberos_realm is specified without a keytab, then the binddn and bindpw are used to obtain the ticket necessary for GSSAPI authentication.

timeout specifies a timeout (in seconds) after which calls to synchronous LDAP APIs will abort if no response is received.

dns_timeout specifies the timeout (in seconds) after which the poll(2)/select(2) following a connect(2) returns in case of no activity for openldap. For nslcd this specifies the time limit (in seconds) to use when connecting to the directory server. This directly impacts the length of time that the LDAP service tries before failing over to a secondary LDAP URI.

has_samba_schema determines whether to configure samba to use the ldapsam passdb backend to provide SMB access to LDAP users. This feature requires the presence of Samba LDAP schema extensions on the remote LDAP server.

lldp

lldp.config
-
lldp.country_choices

Returns country choices for LLDP.

lldp.update
Arguments:
{ "type": "object", "properties": { "intdesc": { "type": "boolean" }, "country": { "type": "string" }, "location": { "type": "string" } }, "additionalProperties": false, "title": "lldp_update", "default": {} }

Update LLDP Service Configuration.

country is a two letter ISO 3166 country code required for LLDP location support.

location is an optional attribute specifying the physical location of the host.

mail

mail.config
-
mail.send
Job This endpoint is a Job. Please refer to the Jobs section for details.
A file can be uploaded to this endpoint. Please refer to the Jobs section to upload a file.
Arguments:
{ "type": "object", "properties": { "subject": { "type": "string" }, "text": { "type": "string" }, "html": { "type": [ "string", "null" ] }, "to": { "type": "array", "items": [ { "type": "string" } ] }, "cc": { "type": "array", "items": [ { "type": "string" } ] }, "interval": { "type": [ "integer", "null" ] }, "channel": { "type": [ "string", "null" ] }, "timeout": { "type": "integer" }, "attachments": { "type": "boolean" }, "queue": { "type": "boolean" }, "extra_headers": { "type": "object", "properties": {}, "additionalProperties": true } }, "additionalProperties": false, "title": "mail_message", "default": {} }
{ "type": "object", "properties": { "fromemail": { "type": "string" }, "fromname": { "type": "string" }, "outgoingserver": { "type": "string" }, "port": { "type": "integer" }, "security": { "type": "string", "enum": [ "PLAIN", "SSL", "TLS" ] }, "smtp": { "type": "boolean" }, "user": { "type": "string" }, "pass": { "type": "string" }, "oauth": { "type": "object", "properties": { "client_id": { "type": "string" }, "client_secret": { "type": "string" }, "refresh_token": { "type": "string" } }, "additionalProperties": false } }, "additionalProperties": false, "title": "mail_update", "default": {} }

Sends mail using configured mail settings.

text will be formatted to HTML using Markdown and rendered using default E-Mail template. You can put your own HTML using html. If html is null, no HTML MIME part will be added to E-Mail.

If attachments is true, a list composed of the following dicts is required via HTTP upload:

  • headers (list) - each header dict contains name (str), value (str) and optional params (dict)
  • content (str)

[ { "headers": [ { "name": "Content-Transfer-Encoding", "value": "base64" }, { "name": "Content-Type", "value": "application/octet-stream", "params": { "name": "test.txt" } } ], "content": "dGVzdAo=" } ]

mail.update
Arguments:
{ "type": "object", "properties": { "fromemail": { "type": "string" }, "fromname": { "type": "string" }, "outgoingserver": { "type": "string" }, "port": { "type": "integer" }, "security": { "type": "string", "enum": [ "PLAIN", "SSL", "TLS" ] }, "smtp": { "type": "boolean" }, "user": { "type": "string" }, "pass": { "type": "string" }, "oauth": { "type": "object", "properties": { "client_id": { "type": "string" }, "client_secret": { "type": "string" }, "refresh_token": { "type": "string" } }, "additionalProperties": false } }, "additionalProperties": false, "title": "mail_update", "default": {} }

Update Mail Service Configuration.

fromemail is used as a sending address which the mail server will use for sending emails.

outgoingserver is the hostname or IP address of SMTP server used for sending an email.

security is type of encryption desired.

smtp is a boolean value which, when set, indicates that SMTP authentication is enabled and that user/pass are then required attributes.

multipath

multipath.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Get multipaths and their consumers.

Get all multipaths

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "multipath.query",
    "params": []
}

returns

[
  {
    "type": "root",
    "name": "multipath/disk5",
    "status": "OPTIMAL",
    "children": [
      {
        "type": "consumer",
        "name": "da1",
        "status": "PASSIVE",
        "lun_id": "5000cca05c9e1400"
      },
      {
        "type": "consumer",
        "name": "da23",
        "status": "ACTIVE",
        "lun_id": "5000cca05c9e1400"
      }
    ]
  }
]

network.configuration

network.configuration.config
-
network.configuration.update
Arguments:
{ "type": "object", "properties": { "hostname": { "type": "string" }, "hostname_b": { "type": "string" }, "hostname_virtual": { "type": "string" }, "domain": { "type": "string" }, "domains": { "type": "array", "items": [ { "type": "string" } ] }, "service_announcement": { "type": "object", "properties": { "netbios": { "type": "boolean" }, "mdns": { "type": "boolean" }, "wsd": { "type": "boolean" } }, "additionalProperties": false }, "ipv4gateway": { "type": "string" }, "ipv6gateway": { "type": "string" }, "nameserver1": { "type": "string" }, "nameserver2": { "type": "string" }, "nameserver3": { "type": "string" }, "httpproxy": { "type": "string" }, "netwait_enabled": { "type": "boolean" }, "netwait_ip": { "type": "array", "items": [ { "type": "string" } ] }, "hosts": { "type": "string" } }, "additionalProperties": false, "title": "global_configuration_update", "default": {} }

Update Network Configuration Service configuration.

ipv4gateway if set is used instead of the default gateway provided by DHCP.

nameserver1 is primary DNS server.

nameserver2 is secondary DNS server.

nameserver3 is tertiary DNS server.

httpproxy attribute must be provided if a proxy is to be used for network operations.

netwait_enabled is a boolean attribute which when set indicates that network services will not start at boot unless they are able to ping the addresses listed in netwait_ip list.

service_announcement determines the broadcast protocols that will be used to advertise the server. netbios enables the NetBIOS name server (NBNS), which starts concurrently with the SMB service. SMB clients will only perform NBNS lookups if SMB1 is enabled. NBNS may be required for legacy SMB clients. mdns enables multicast DNS service announcements for enabled services. wsd enables Web Service Discovery support.
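
For example, a request setting the default gateway and DNS servers (the addresses are illustrative):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "network.configuration.update",
    "params": [{
        "ipv4gateway": "192.168.0.1",
        "nameserver1": "192.168.0.1",
        "nameserver2": "8.8.8.8"
    }]
}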

network.general

network.general.summary

Retrieve general information for current Network.

Returns a dictionary. For example:

{
    "ips": {
        "vtnet0": {
            "IPV4": [
                "192.168.0.15/24"
            ]
        }
    },
    "default_routes": [
        "192.168.0.1"
    ],
    "nameservers": [
        "192.168.0.1"
    ]
}

nfs

nfs.add_principal
Arguments:
{ "type": "object", "properties": { "username": { "type": "string" }, "password": { "type": "string" } }, "additionalProperties": false, "title": "add_nfs_principal_creds", "default": {} }

Use user-provided admin credentials to kinit, add NFS SPN entries to the remote kerberos server, and then append the new entries to our system keytab.

Currently this is only supported in AD environments.

nfs.bindip_choices

Returns ip choices for NFS service to use

nfs.config
-
nfs.update
Arguments:
{ "type": "object", "properties": { "servers": { "type": "integer" }, "udp": { "type": "boolean" }, "allow_nonroot": { "type": "boolean" }, "v4": { "type": "boolean" }, "v4_v3owner": { "type": "boolean" }, "v4_krb": { "type": "boolean" }, "v4_domain": { "type": "string" }, "bindip": { "type": "array", "items": [ { "type": "string" } ] }, "mountd_port": { "type": [ "integer", "null" ] }, "rpcstatd_port": { "type": [ "integer", "null" ] }, "rpclockd_port": { "type": [ "integer", "null" ] }, "userd_manage_gids": { "type": "boolean" }, "mountd_log": { "type": "boolean" }, "statd_lockd_log": { "type": "boolean" } }, "additionalProperties": false, "title": "nfs_update", "default": {} }

Update NFS Service Configuration.

servers represents the number of NFS servers to create.

When allow_nonroot is set, it allows non-root mount requests to be served.

bindip is a list of IP's on which NFS will listen for requests. When it is unset/empty, NFS listens on all available addresses.

v4 when set means that we switch from NFSv3 to NFSv4.

v4_v3owner when set means that system will use NFSv3 ownership model for NFSv4.

v4_krb will force NFS shares to fail if the Kerberos ticket is unavailable.

v4_domain overrides the default DNS domain name for NFSv4.

mountd_port specifies the port mountd(8) binds to.

rpcstatd_port specifies the port rpc.statd(8) binds to.

rpclockd_port specifies the port rpc.lockd(8) binds to.

Update NFS Service Configuration to listen on 192.168.0.10 and use NFSv4

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.resilver.update",
    "params": [{
        "bindip": [
            "192.168.0.10"
        ],
        "v4": true
    }]
}

nis

nis.config
-
nis.get_state

Wrapper function for 'directoryservices.get_state'. Returns only the state of the NIS service.

nis.update
Arguments:
{ "type": "object", "properties": { "domain": { "type": "string" }, "servers": { "type": "array", "items": [ { "type": "null" } ] }, "secure_mode": { "type": "boolean" }, "manycast": { "type": "boolean" }, "enable": { "type": "boolean" } }, "additionalProperties": false, "title": "nis_update", "default": {} }

Update NIS Service Configuration.

domain is the name of NIS domain.

servers is a list of hostnames/IP addresses.

secure_mode when enabled sets ypbind(8) to refuse binding to any NIS server not running as root on a TCP port over 1024.

manycast when enabled sets ypbind(8) to bind to the server that responds the fastest.

enable enables and starts the NIS service. The NIS service is disabled when this value is changed to False.

openvpn.client

openvpn.client.authentication_algorithm_choices

Returns a dictionary of valid authentication algorithms which can be used with the OpenVPN client.

openvpn.client.cipher_choices

Returns a dictionary of valid ciphers which can be used with the OpenVPN client.

openvpn.client.config
-
openvpn.client.update
Arguments:
{ "type": "object", "properties": { "nobind": { "type": "boolean" }, "tls_crypt_auth_enabled": { "type": "boolean" }, "client_certificate": { "type": [ "integer", "null" ] }, "root_ca": { "type": [ "integer", "null" ] }, "port": { "type": "integer" }, "additional_parameters": { "type": "string" }, "authentication_algorithm": { "type": [ "string", "null" ] }, "cipher": { "type": [ "string", "null" ] }, "compression": { "type": [ "string", "null" ], "enum": [ "LZO", "LZ4" ] }, "device_type": { "type": "string", "enum": [ "TUN", "TAP" ] }, "protocol": { "type": "string", "enum": [ "UDP", "UDP4", "UDP6", "TCP", "TCP4", "TCP6" ] }, "remote": { "type": "string" }, "tls_crypt_auth": { "type": [ "string", "null" ] } }, "additionalProperties": false, "title": "openvpn_client_update", "default": {} }

Update OpenVPN Client configuration.

remote can be a valid ip address / domain which openvpn will try to connect to.

nobind must be enabled if OpenVPN client / server are to run concurrently.

openvpn.server

openvpn.server.authentication_algorithm_choices

Returns a dictionary of valid authentication algorithms which can be used with OpenVPN server.

openvpn.server.cipher_choices

Returns a dictionary of valid ciphers which can be used with OpenVPN server.

openvpn.server.client_configuration_generation
Arguments:
{ "type": "integer", "title": "client_certificate_id" }
{ "title": "server_address", "type": [ "string", "null" ] }

Returns a configuration for OpenVPN client which can be used with any client to connect to FN/TN OpenVPN server.

client_certificate_id should be a valid certificate issued for use with OpenVPN client service.

server_address if specified auto-fills the remote directive in the OpenVPN configuration enabling the end user to use the file without making any edits to connect to OpenVPN server.
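
For example, a request generating a client configuration for certificate 2 and auto-filling the remote directive (the certificate id and address are illustrative):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "openvpn.server.client_configuration_generation",
    "params": [
        2,
        "vpn.example.com"
    ]
}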

openvpn.server.config
-
openvpn.server.renew_static_key

Reset OpenVPN server's TLS static key which will be used to encrypt/authenticate control channel packets.

openvpn.server.update
Arguments:
{ "type": "object", "properties": { "tls_crypt_auth_enabled": { "type": "boolean" }, "netmask": { "type": "integer" }, "server_certificate": { "type": [ "integer", "null" ] }, "port": { "type": "integer" }, "root_ca": { "type": [ "integer", "null" ] }, "server": { "type": "string" }, "additional_parameters": { "type": "string" }, "authentication_algorithm": { "type": [ "string", "null" ] }, "cipher": { "type": [ "string", "null" ] }, "compression": { "type": [ "string", "null" ], "enum": [ "LZO", "LZ4" ] }, "device_type": { "type": "string", "enum": [ "TUN", "TAP" ] }, "protocol": { "type": "string", "enum": [ "UDP", "UDP4", "UDP6", "TCP", "TCP4", "TCP6" ] }, "tls_crypt_auth": { "type": [ "string", "null" ] }, "topology": { "type": [ "string", "null" ], "enum": [ "NET30", "P2P", "SUBNET" ] } }, "additionalProperties": false, "title": "openvpn_server_update", "default": {} }

Update OpenVPN Server configuration.

When tls_crypt_auth_enabled is enabled and tls_crypt_auth not provided, a static key is automatically generated to be used with OpenVPN server.

plugin

plugin.available
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "object", "properties": { "cache": { "type": "boolean" }, "plugin_repository": { "type": "string" }, "branch": { "type": "string" } }, "additionalProperties": false, "title": "available_plugin_options", "default": {} }

List available plugins which can be fetched for plugin_repository.

plugin.branches_choices
Arguments:
{ "title": "repository", "default": null, "type": [ "string", "null" ] }
-
plugin.create
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "object", "properties": { "plugin_name": { "type": "string" }, "jail_name": { "type": "string" }, "props": { "type": "array", "items": [ { "type": "null" } ] }, "branch": { "type": [ "string", "null" ] }, "plugin_repository": { "type": "string" } }, "additionalProperties": false, "title": "plugin_create", "default": {} }

Create a Plugin.

plugin_name is the name of the plugin specified by the INDEX file in "plugin_repository" and its JSON file.

jail_name is the name of the jail that will manage the plugin. Required.

props is a list of jail properties that the user manually sets. Plugins should always set the jail networking capability with DHCP, IP Address, or NAT properties, e.g. dhcp=1 / ip4_addr="192.168.0.2" / nat=1.

plugin_repository is a git URI that fetches data for plugin_name.

branch is the FreeNAS repository branch to use as the base for the plugin_repository. The default is to use the current system version. Example: 11.3-RELEASE.
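
A sketch of a plugin create request; the plugin and jail names are illustrative placeholders, and the props entry uses the dhcp=1 form mentioned above:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "plugin.create",
    "params": [{
        "plugin_name": "plexmediaserver",
        "jail_name": "plex",
        "props": ["dhcp=1"]
    }]
}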

plugin.defaults
Arguments:
{ "type": "object", "properties": { "refresh": { "type": "boolean" }, "plugin": { "type": "string" }, "branch": { "type": [ "string", "null" ] }, "plugin_repository": { "type": "string" } }, "additionalProperties": false, "title": "options", "default": {} }

Retrieve default properties specified for plugin in the plugin's manifest.

When refresh is specified, plugin_repository is updated before retrieving plugin's default properties.

plugin.delete
Arguments:
{ "title": "id", "type": "string" }

Delete plugin id.

plugin.official_repositories

List officially supported plugin repositories.

plugin.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Query installed plugins with query-filters and query-options.

plugin.retrieve_versions_for_repos
-
plugin.update
Arguments:
{ "title": "id", "type": "string" }
{ "type": "object", "properties": { "plugin": { "type": "boolean" } }, "additionalProperties": true, "title": "jail_update", "default": {} }

Update plugin id.

plugin.update_plugin
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "title": "jail", "type": "string" }
{ "type": "boolean", "title": "update_jail", "default": true }

Updates the specified plugin to the latest available plugin version and, optionally, updates the jail to the latest patch level.

pool

pool.attach
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "integer", "title": "oid" }
{ "type": "object", "properties": { "target_vdev": { "type": "string" }, "new_disk": { "type": "string" }, "passphrase": { "type": "string" } }, "additionalProperties": false, "title": "pool_attach", "default": {} }

For TrueNAS Core/Enterprise platform, if the oid pool is passphrase GELI encrypted, passphrase must be specified for this operation to succeed.

target_vdev is the GUID of the vdev where the disk needs to be attached. In case of a STRIPED vdev, this is the STRIPED disk GUID which will be converted to a mirror. If target_vdev is a mirror, it will be converted into an n-way mirror.
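
For example, a request attaching a new disk to an existing vdev (the pool id, vdev GUID and disk name are illustrative):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.attach",
    "params": [
        1,
        {
            "target_vdev": "12345678901234567890",
            "new_disk": "da7"
        }
    ]
}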

pool.attachments
Arguments:
{ "type": "integer", "title": "id" }

Return a list of services dependent on this pool.

Responsible for telling the user whether there is a related share, asking for confirmation.

pool.create
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "object", "properties": { "name": { "type": "string" }, "encryption": { "type": "boolean" }, "deduplication": { "type": [ "string", "null" ], "enum": [ null, "ON", "VERIFY", "OFF" ] }, "encryption_options": { "type": "object", "properties": { "generate_key": { "type": "boolean" }, "pbkdf2iters": { "type": "integer" }, "algorithm": { "type": "string", "enum": [ "AES-128-CCM", "AES-192-CCM", "AES-256-CCM", "AES-128-GCM", "AES-192-GCM", "AES-256-GCM" ] }, "passphrase": { "type": [ "string", "null" ] }, "key": { "type": [ "string", "null" ] } }, "additionalProperties": false }, "topology": { "type": "object", "properties": { "data": { "type": "array", "items": [ { "type": "object" } ] }, "special": { "type": "array", "items": [ { "type": "object" } ] }, "dedup": { "type": "array", "items": [ { "type": "object" } ] }, "cache": { "type": "array", "items": [ { "type": "object" } ] }, "log": { "type": "array", "items": [ { "type": "object" } ] }, "spares": { "type": "array", "items": [ { "type": "string" } ] } }, "additionalProperties": false } }, "additionalProperties": false, "title": "pool_create", "default": {} }

Create a new ZFS Pool.

topology is an object which requires at least one data entry. All data entries (vdevs) must be of the same type.

deduplication when set to ON or VERIFY makes sure that no block of data is duplicated in the pool. When VERIFY is specified, if two blocks have similar signatures, a byte-to-byte comparison is performed to ensure that the blocks are identical. This should be used in special circumstances only, as it carries a significant overhead.

encryption when enabled will create a ZFS encrypted root dataset for the name pool.

encryption_options specifies the configuration for encryption of the root dataset of the name pool. encryption_options.passphrase must be specified if encryption for the root dataset is desired with a passphrase as the key. Otherwise a hex-encoded key can be specified by providing encryption_options.key. encryption_options.generate_key when enabled automatically generates the key to be used for dataset encryption.

It should be noted that keys are stored by the system for automatic locking/unlocking on import/export of encrypted datasets. If that is not desired, the dataset should be created with a passphrase as the key.

Example of topology:

{
    "data": [
        {"type": "RAIDZ1", "disks": ["da1", "da2", "da3"]}
    ],
    "cache": [
        {"type": "STRIPE", "disks": ["da4"]}
    ],
    "log": [
        {"type": "STRIPE", "disks": ["da5"]}
    ],
    "spares": ["da6"]
}

Create a pool named "tank", raidz1 with 3 disks, 1 cache disk, 1 ZIL/log disk and 1 hot spare disk.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.create",
    "params": [{
        "name": "tank",
        "topology": {
            "data": [
                {"type": "RAIDZ1", "disks": ["da1", "da2", "da3"]}
            ],
            "cache": [
                {"type": "STRIPE", "disks": ["da4"]}
            ],
            "log": [
                {"type": "RAIDZ1", "disks": ["da5"]}
            ],
            "spares": ["da6"]
        }
    }]
}
pool.detach
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "label": { "type": "string" } }, "additionalProperties": false, "title": "options", "default": {} }

Detach a disk from pool of id id.

label is the vdev guid or device name.

Detach ZFS device.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.detach,
    "params": [1, {
        "label": "80802394992848654"
    }]
}
pool.download_encryption_key
Arguments:
{ "type": "integer", "title": "id" }
{ "title": "filename", "default": "geli.key", "type": "string" }

Download encryption key for a given pool id.

pool.expand
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "geli": { "type": "object", "properties": { "passphrase": { "type": "string" } }, "additionalProperties": false } }, "additionalProperties": false, "title": "options", "default": {} }

Expand pool to fit all available disk space.
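
Expand pool of id 1 (a minimal sketch; on the TrueNAS Core/Enterprise platform a GELI passphrase would be passed in the geli options):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.expand",
    "params": [1]
}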

pool.export
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "cascade": { "type": "boolean" }, "restart_services": { "type": "boolean" }, "destroy": { "type": "boolean" } }, "additionalProperties": false, "title": "options", "default": {} }

Export pool of id.

cascade will delete all attachments of the given pool (pool.attachments). restart_services will restart services that have open files on the given pool. destroy will also PERMANENTLY destroy the pool/data.

Export pool of id 1.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.export,
    "params": [1, {
        "cascade": true,
        "destroy": false
    }]
}
pool.filesystem_choices
Arguments:
{ "type": "array", "title": "types", "default": [ "FILESYSTEM", "VOLUME" ], "items": [ { "type": "string" } ] }

Returns all available datasets, except system datasets.

Get all datasets.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.filesystem_choices",
    "params": []
}

Get only filesystems (exclude volumes).

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.filesystem_choices",
    "params": [["FILESYSTEM"]]
}
pool.get_disks
Arguments:
{ "type": [ "integer", "null" ], "title": "id", "default": null }

Get all disks in use by pools. If id is provided only the disks from the given pool id will be returned.
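
Get disks of pool of id 1 (omitting the argument, or passing null, returns disks for all pools):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.get_disks",
    "params": [1]
}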

pool.import_disk
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "title": "device", "type": "string" }
{ "title": "fs_type", "type": "string" }
{ "type": "object", "properties": {}, "additionalProperties": true, "title": "fs_options", "default": {} }
{ "title": "dst_path", "type": "string" }

Import a disk by copying its content to a pool.

Import a FAT32 (msdosfs) disk.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.import_disk,
    "params": [
        "/dev/da0", "msdosfs", {}, "/mnt/tank/mydisk"
    ]
}
pool.import_disk_autodetect_fs_type
Arguments:
{ "title": "device", "type": "string" }

Autodetect filesystem type for pool.import_disk.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.import_disk_autodetect_fs_type",
    "params": ["/dev/da0"]
}
pool.import_disk_msdosfs_locales

Get a list of locales for msdosfs type to be used in pool.import_disk.

pool.import_find
Job This endpoint is a Job. Please refer to the Jobs section for details.

Returns a job id which can be used to retrieve a list of pools available for import with the following details as a result of the job: name, guid, status, hostname.

pool.import_pool
Job This endpoint is a Job. Please refer to the Jobs section for details.
A file can be uploaded to this endpoint. Please refer to the Jobs section to upload a file.
Arguments:
{ "type": "object", "properties": { "guid": { "type": "string" }, "name": { "type": "string" }, "passphrase": { "type": "string" }, "enable_attachments": { "type": "boolean" } }, "additionalProperties": false, "title": "pool_import", "default": {} }

Import a pool found with pool.import_find.

If a name is specified the pool will be imported using that new name.

passphrase is required while importing an encrypted pool. In that case this method needs to be called using the /_upload/ endpoint with the encryption key.

If enable_attachments is set to true, attachments that were disabled during pool export will be re-enabled.

Errors: ENOENT - Pool not found

Import pool of guid 5571830764813710860.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.import_pool,
    "params": [{
        "guid": "5571830764813710860"
    }]
}
pool.is_upgraded
Arguments:
{ "type": "integer", "title": "id" }

Returns whether or not the pool of id is on the latest version and has all feature flags enabled.

Check if pool of id 1 is upgraded.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.is_upgraded",
    "params": [1]
}
pool.lock
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "integer", "title": "id" }
{ "title": "passphrase", "type": "string" }

Lock encrypted pool id.
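
Lock pool of id 1 (the passphrase is a placeholder value):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.lock",
    "params": [1, "mysecretpassphrase"]
}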

pool.offline
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "label": { "type": "string" } }, "additionalProperties": false, "title": "options", "default": {} }

Offline a disk from pool of id id.

label is the vdev guid or device name.

Offline ZFS device.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.offline,
    "params": [1, {
        "label": "80802394992848654"
    }]
}
pool.online
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "label": { "type": "string" } }, "additionalProperties": false, "title": "options", "default": {} }

Online a disk from pool of id id.

label is the vdev guid or device name.

Online ZFS device.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.online,
    "params": [1, {
        "label": "80802394992848654"
    }]
}
pool.passphrase
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "passphrase": { "type": [ "string", "null" ] }, "admin_password": { "type": "string" } }, "additionalProperties": false, "title": "options", "default": {} }

Create/Change/Remove passphrase for an encrypted pool.

Setting passphrase to null will remove the passphrase. admin_password is required when changing or removing passphrase.

Change passphrase for pool 1.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.passphrase,
    "params": [1, {
        "passphrase": "mysecretpassphrase",
        "admin_password": "rootpassword"
    }]
}
pool.processes
Arguments:
{ "type": "integer", "title": "id" }

Returns a list of running processes using this pool.

pool.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
pool.recoverykey_add
Job This endpoint is a Job. Please refer to the Jobs section for details.
A file can be downloaded from this endpoint. Please refer to the Jobs section to download a file.
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "admin_password": { "type": "string" } }, "additionalProperties": false, "title": "options", "default": {} }

Add Recovery key for encrypted pool id.

This is to be used with core.download which will provide a URL to download the recovery key.
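
Add a recovery key for pool of id 1 (admin_password is a placeholder value):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.recoverykey_add",
    "params": [1, {
        "admin_password": "rootpassword"
    }]
}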

pool.recoverykey_rm
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "admin_password": { "type": "string" } }, "additionalProperties": false, "title": "options", "default": {} }

Remove recovery key for encrypted pool id.

Remove recovery key for pool 1.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.recoverykey_rm,
    "params": [1, {
        "admin_password": "rootpassword"
    }]
}
pool.rekey
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "admin_password": { "type": "string" } }, "additionalProperties": false, "title": "options", "default": {} }

Rekey encrypted pool id.

Rekey pool 1.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.rekey,
    "params": [1, {
        "admin_password": "rootpassword"
    }]
}
pool.remove
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "label": { "type": "string" } }, "additionalProperties": false, "title": "options", "default": {} }

Remove a disk from pool of id id.

label is the vdev guid or device name.

Error codes:

EZFS_NOSPC(2032): out of space to remove a device
EZFS_NODEVICE(2017): no such device in pool
EZFS_NOREPLICAS(2019): no valid replicas

Remove ZFS device.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.remove,
    "params": [1, {
        "label": "80802394992848654"
    }]
}
pool.replace
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "label": { "type": "string" }, "disk": { "type": "string" }, "force": { "type": "boolean" }, "passphrase": { "type": "string" } }, "additionalProperties": false, "title": "options", "default": {} }

Replace a disk on a pool.

label is the ZFS guid or a device name. disk is the identifier of a disk. passphrase is only valid for the TrueNAS Core/Enterprise platform where the pool is GELI encrypted.

Replace missing ZFS device with disk {serial}FOO.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.replace",
    "params": [1, {
        "label": "80802394992848654",
        "disk": "{serial}FOO"
    }]
}
pool.scrub
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "integer", "title": "id" }
{ "title": "action", "type": "string", "enum": [ "START", "STOP", "PAUSE" ] }

Performs a scrub action to pool of id.

action can be one of "START", "STOP" or "PAUSE".

Start scrub on pool of id 1.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.scrub",
    "params": [1, "START"]
}
pool.unlock
Job This endpoint is a Job. Please refer to the Jobs section for details.
A file can be uploaded to this endpoint. Please refer to the Jobs section to upload a file.
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "passphrase": { "type": "string" }, "recoverykey": { "type": "boolean" }, "services_restart": { "type": "array", "items": [ { "type": "null" } ] } }, "additionalProperties": false, "title": "pool_unlock_options", "default": {} }

Unlock encrypted pool id.

passphrase is required if a recovery key is not provided.

If recoverykey is true this method expects the recovery key file to be uploaded using the /_upload/ endpoint.

services_restart is a list of services to be restarted when the pool gets unlocked. Said list can be retrieved using pool.unlock_services_restart_choices.

Unlock pool of id 1, restarting "cifs" service.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.unlock,
    "params": [1, {
        "passphrase": "mysecretpassphrase",
        "services_restart": ["cifs"]
    }]
}
pool.unlock_services_restart_choices
Arguments:
{ "type": "integer", "title": "id" }

Get a mapping of service identifiers and labels that can be restarted on volume unlock.
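
Get restart choices for pool of id 1:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.unlock_services_restart_choices",
    "params": [1]
}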

pool.update
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "encryption_options": { "type": "object", "properties": { "generate_key": { "type": "boolean" }, "pbkdf2iters": { "type": "integer" }, "algorithm": { "type": "string", "enum": [ "AES-128-CCM", "AES-192-CCM", "AES-256-CCM", "AES-128-GCM", "AES-192-GCM", "AES-256-GCM" ] }, "passphrase": { "type": [ "string", "null" ] }, "key": { "type": [ "string", "null" ] } }, "additionalProperties": false }, "topology": { "type": "object", "properties": { "data": { "type": "array", "items": [ { "type": "object" } ] }, "special": { "type": "array", "items": [ { "type": "object" } ] }, "dedup": { "type": "array", "items": [ { "type": "object" } ] }, "cache": { "type": "array", "items": [ { "type": "object" } ] }, "log": { "type": "array", "items": [ { "type": "object" } ] }, "spares": { "type": "array", "items": [ { "type": "string" } ] } }, "additionalProperties": false }, "autotrim": { "type": "string", "enum": [ "ON", "OFF" ] } }, "additionalProperties": false, "title": "pool_create", "default": {} }

Update pool of id, adding the new topology.

The type of the new data vdevs must be the same as the existing vdevs.

Add a new set of raidz1 to pool of id 1.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.update",
    "params": [1, {
        "topology": {
            "data": [
                {"type": "RAIDZ1", "disks": ["da7", "da8", "da9"]}
            ]
        }
    }]
}
pool.upgrade
Arguments:
{ "type": "integer", "title": "id" }

Upgrade pool of id to latest version with all feature flags.

Upgrade pool of id 1.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.upgrade",
    "params": [1]
}

pool.dataset

pool.dataset.attachments
Arguments:
{ "title": "id", "type": "string" }

Return a list of services dependent on this dataset.

Responsible for telling the user whether there is a related share, asking for confirmation.

Example return value: [ { "type": "NFS Share", "service": "nfs", "attachments": ["/mnt/tank/work"] } ]

pool.dataset.change_key
Job This endpoint is a Job. Please refer to the Jobs section for details.
A file can be uploaded to this endpoint. Please refer to the Jobs section to upload a file.
Arguments:
{ "title": "id", "type": "string" }
{ "type": "object", "properties": { "generate_key": { "type": "boolean" }, "key_file": { "type": "boolean" }, "pbkdf2iters": { "type": "integer" }, "passphrase": { "type": [ "string", "null" ] }, "key": { "type": [ "string", "null" ] } }, "additionalProperties": false, "title": "change_key_options", "default": {} }

Change encryption properties for id encrypted dataset.

Changing dataset encryption to use passphrase instead of a key is not allowed if:

1) It has encrypted roots as children which are encrypted with a key.
2) It is a root dataset where the system dataset is located.
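
An illustrative request switching dataset "tank/private" (placeholder name) to a passphrase key:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.dataset.change_key",
    "params": ["tank/private", {
        "passphrase": "mysecretpassphrase"
    }]
}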

pool.dataset.compression_choices

Retrieve the compression algorithms supported by ZFS.

pool.dataset.create
Arguments:
{ "type": "object", "properties": { "name": { "type": "string" }, "type": { "type": "string", "enum": [ "FILESYSTEM", "VOLUME" ] }, "volsize": { "type": "integer" }, "volblocksize": { "type": "string", "enum": [ "512", "1K", "2K", "4K", "8K", "16K", "32K", "64K", "128K" ] }, "sparse": { "type": "boolean" }, "force_size": { "type": "boolean" }, "comments": { "type": "string" }, "sync": { "type": "string", "enum": [ "STANDARD", "ALWAYS", "DISABLED" ] }, "compression": { "type": "string", "enum": [ "OFF", "LZ4", "GZIP", "GZIP-1", "GZIP-9", "ZSTD", "ZSTD-FAST", "ZLE", "LZJB", "ZSTD-1", "ZSTD-2", "ZSTD-3", "ZSTD-4", "ZSTD-5", "ZSTD-6", "ZSTD-7", "ZSTD-8", "ZSTD-9", "ZSTD-10", "ZSTD-11", "ZSTD-12", "ZSTD-13", "ZSTD-14", "ZSTD-15", "ZSTD-16", "ZSTD-17", "ZSTD-18", "ZSTD-19", "ZSTD-FAST-1", "ZSTD-FAST-2", "ZSTD-FAST-3", "ZSTD-FAST-4", "ZSTD-FAST-5", "ZSTD-FAST-6", "ZSTD-FAST-7", "ZSTD-FAST-8", "ZSTD-FAST-9", "ZSTD-FAST-10", "ZSTD-FAST-20", "ZSTD-FAST-30", "ZSTD-FAST-40", "ZSTD-FAST-50", "ZSTD-FAST-60", "ZSTD-FAST-70", "ZSTD-FAST-80", "ZSTD-FAST-90", "ZSTD-FAST-100", "ZSTD-FAST-500", "ZSTD-FAST-1000" ] }, "atime": { "type": "string", "enum": [ "ON", "OFF" ] }, "exec": { "type": "string", "enum": [ "ON", "OFF" ] }, "managedby": { "type": "string" }, "quota": { "type": [ "integer", "null" ] }, "quota_warning": { "type": "integer" }, "quota_critical": { "type": "integer" }, "refquota": { "type": [ "integer", "null" ] }, "refquota_warning": { "type": "integer" }, "refquota_critical": { "type": "integer" }, "reservation": { "type": "integer" }, "refreservation": { "type": "integer" }, "special_small_block_size": { "type": "integer" }, "copies": { "type": "integer" }, "snapdir": { "type": "string", "enum": [ "VISIBLE", "HIDDEN" ] }, "deduplication": { "type": "string", "enum": [ "ON", "VERIFY", "OFF" ] }, "readonly": { "type": "string", "enum": [ "ON", "OFF" ] }, "recordsize": { "type": "string", "enum": [ "512", "1K", "2K", "4K", "8K", "16K", "32K", "64K", "128K", "256K", "512K", "1024K" ] }, "casesensitivity": { "type": "string", "enum": [ "SENSITIVE", "INSENSITIVE", "MIXED" ] }, "aclmode": { "type": "string", "enum": [ "PASSTHROUGH", "RESTRICTED" ] }, "acltype": { "type": "string", "enum": [ "NOACL", "NFS4ACL", "POSIXACL" ] }, "share_type": { "type": "string", "enum": [ "GENERIC", "SMB" ] }, "xattr": { "type": "string", "enum": [ "ON", "SA" ] }, "encryption_options": { "type": "object", "properties": { "generate_key": { "type": "boolean" }, "pbkdf2iters": { "type": "integer" }, "algorithm": { "type": "string", "enum": [ "AES-128-CCM", "AES-192-CCM", "AES-256-CCM", "AES-128-GCM", "AES-192-GCM", "AES-256-GCM" ] }, "passphrase": { "type": [ "string", "null" ] }, "key": { "type": [ "string", "null" ] } }, "additionalProperties": false }, "encryption": { "type": "boolean" }, "inherit_encryption": { "type": "boolean" } }, "additionalProperties": false, "title": "pool_dataset_create", "default": {} }

Creates a dataset/zvol.

volsize is required for type=VOLUME and is supposed to be a multiple of the block size. sparse and volblocksize are only used for type=VOLUME.

encryption when enabled will create name as a ZFS encrypted dataset. There are 2 cases where ZFS encryption is not allowed for a dataset: 1) The pool in question is GELI encrypted. 2) The parent dataset is encrypted with a passphrase and name is being created with a key for encrypting the dataset.

encryption_options specifies the configuration for encryption of the name dataset. encryption_options.passphrase must be specified if encryption for the dataset is desired with a passphrase as the key. Otherwise a hex-encoded key can be specified by providing encryption_options.key. encryption_options.generate_key when enabled automatically generates the key to be used for dataset encryption.

It should be noted that keys are stored by the system for automatic locking/unlocking on import/export of encrypted datasets. If that is not desired, the dataset should be created with a passphrase as the key.

Create a dataset within tank pool.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.dataset.create,
    "params": [{
        "name": "tank/myuser",
        "comments": "Dataset for myuser"
    }]
}
pool.dataset.delete
Arguments:
{ "title": "id", "type": "string" }
{ "type": "object", "properties": { "recursive": { "type": "boolean" }, "force": { "type": "boolean" } }, "additionalProperties": false, "title": "dataset_delete", "default": {} }

Delete dataset/zvol id.

recursive will also delete/destroy all child datasets. force will force delete busy datasets.

Delete "tank/myuser" dataset.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.dataset.delete",
    "params": ["tank/myuser"]
}
pool.dataset.encryption_algorithm_choices

Retrieve encryption algorithms supported for ZFS dataset encryption.

pool.dataset.encryption_summary
Job This endpoint is a Job. Please refer to the Jobs section for details.
A file can be uploaded to this endpoint. Please refer to the Jobs section to upload a file.
Arguments:
{ "title": "id", "type": "string" }
{ "type": "object", "properties": { "key_file": { "type": "boolean" }, "datasets": { "type": "array", "items": [ { "type": "object" } ] } }, "additionalProperties": false, "title": "encryption_root_summary_options", "default": {} }

Retrieve summary of all encrypted roots under id.

Keys/passphrase can be supplied to check if the keys are valid.

It should be noted that there are 2 keys which show, when a recursive unlock operation is done for id, which datasets will be unlocked and, if not, why they won't be unlocked. These keys are "unlock_successful" and "unlock_error". The former is a boolean value showing whether the unlock would succeed or fail. The latter is a description of why it failed, if it failed.

If a dataset is already unlocked, it will show up as true for "unlock_successful" regardless of what key the user provided, as the unlock keys in the output reflect how a real unlock operation would behave. If the user is interested in seeing whether a provided key is valid or not, the key to look out for in the output is "valid_key", which validates the key based on what the system has in its database or on the key the user provided, and sets a boolean value for the dataset.

Example output: [ { "name": "vol", "key_format": "PASSPHRASE", "key_present_in_database": false, "valid_key": true, "locked": true, "unlock_error": null, "unlock_successful": true }, { "name": "vol/c1/d1", "key_format": "PASSPHRASE", "key_present_in_database": false, "valid_key": false, "locked": true, "unlock_error": "Provided key is invalid", "unlock_successful": false }, { "name": "vol/c", "key_format": "PASSPHRASE", "key_present_in_database": false, "valid_key": false, "locked": true, "unlock_error": "Key not provided", "unlock_successful": false }, { "name": "vol/c/d2", "key_format": "PASSPHRASE", "key_present_in_database": false, "valid_key": false, "locked": true, "unlock_error": "Child cannot be unlocked when parent "vol/c" is locked and provided key is invalid", "unlock_successful": false } ]

pool.dataset.export_key
Job This endpoint is a Job. Please refer to the Jobs section for details.
A file can be downloaded from this endpoint. Please refer to the Jobs section to download a file.
Arguments:
{ "title": "id", "type": "string" }
{ "type": "boolean", "title": "download", "default": false }

Export the dataset's own encryption key for dataset id. If download is true, the key will be downloaded as a text file; otherwise it will be returned as a string.

Please refer to websocket documentation for downloading the file.
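
Export the key of dataset "tank/private" (placeholder name) as a string:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.dataset.export_key",
    "params": ["tank/private", false]
}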

pool.dataset.export_keys
Job This endpoint is a Job. Please refer to the Jobs section for details.
A file can be downloaded from this endpoint. Please refer to the Jobs section to download a file.
Arguments:
{ "title": "id", "type": "string" }

Export keys for id and its children which are stored in the system. The exported file is a JSON file which has a dictionary containing dataset names as keys and their keys as the value.

Please refer to websocket documentation for downloading the file.

pool.dataset.get_quota
Arguments:
{ "title": "ds", "type": "string" }
{ "title": "quota_type", "type": "string", "enum": [ "USER", "GROUP", "DATASET" ] }
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Return a list of the specified quota_type of quotas on the ZFS dataset ds. Supports query-filters and query-options. used_bytes and used_percentage may not instantly update as space is used.

When quota_type is not DATASET, each quota entry has these fields:

id - the uid or gid to which the quota applies.

name - the user or group name to which the quota applies. Value is null if the id in the quota cannot be resolved to a user or group. This indicates that the user or group does not exist on the server.

quota - the quota size in bytes. A value of zero means unlimited.

used_bytes - the amount of bytes the user has written to the dataset.

used_percentage - the percentage of the user or group quota consumed.

obj_quota - the number of objects that may be owned by id. A value of zero means unlimited.

obj_used - the number of objects currently owned by id.

obj_used_percent - the percentage of the obj_quota currently used.
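
Get user quotas on dataset "tank/work" (a sketch with placeholder values; the empty query-filters and query-options follow the same conventions as the other query methods in this document):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.dataset.get_quota",
    "params": ["tank/work", "USER", [], {}]
}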

pool.dataset.inherit_parent_encryption_properties
Arguments:
{ "title": "id", "type": "string" }

Allows id to inherit its parent's encryption root, discarding its current encryption settings. This can only be done where id has an encrypted parent and id itself is an encryption root.

pool.dataset.lock
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "title": "id", "type": "string" }
{ "type": "object", "properties": { "force_umount": { "type": "boolean" } }, "additionalProperties": false, "title": "lock_options", "default": {} }

Locks id dataset. It will unmount the dataset and its children before locking.
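
Lock dataset "tank/private" (placeholder name), forcing unmount:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.dataset.lock",
    "params": ["tank/private", {
        "force_umount": true
    }]
}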

pool.dataset.permission
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "title": "id", "type": "string" }
{ "type": "object", "properties": { "user": { "type": "string" }, "group": { "type": "string" }, "mode": { "type": [ "string", "null" ] }, "acl": { "type": "array", "items": [ { "type": "object" } ] }, "options": { "type": "object", "properties": { "stripacl": { "type": "boolean" }, "recursive": { "type": "boolean" }, "traverse": { "type": "boolean" } }, "additionalProperties": false } }, "additionalProperties": false, "title": "pool_dataset_permission", "default": {} }

Set permissions for a dataset id. Permissions may be specified as either a posix mode or an nfsv4 acl. Setting mode will fail if the dataset has an existing nfsv4 acl. In this case, the option stripacl must be set to True.

Change permissions of dataset "tank/myuser" to myuser:wheel and 755.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.dataset.permission",
    "params": ["tank/myuser", {
        "user": "myuser",
        "acl": [],
        "group": "wheel",
        "mode": "755",
        "options": {"recursive": true, "stripacl": true},
    }]
}
pool.dataset.processes
Arguments:
{ "title": "id", "type": "string" }

Return a list of processes using this dataset.

Example return value:

[ { "pid": 2520, "name": "smbd", "service": "cifs" }, { "pid": 97778, "name": "minio", "cmdline": "/usr/local/bin/minio -C /usr/local/etc/minio server --address=0.0.0.0:9000 --quiet /mnt/tank/wk" } ]

pool.dataset.promote
Arguments:
{ "title": "id", "type": "string" }

Promote the cloned dataset id.

pool.dataset.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Query Pool Datasets with query-filters and query-options.

We provide two ways to retrieve datasets. The first is a flat structure (default), where all datasets in the system are returned as separate objects which contain all data there is for their children. This retrieval type is slightly slower because of duplicates in each object. The second type is hierarchical, where only top level datasets are returned in the list. They contain all the children in the children key. This retrieval type is slightly faster. These options are controlled by the query-options.extra.flat attribute (default true).

pool.dataset.recommended_zvol_blocksize
Arguments:
{ "title": "pool", "type": "string" }

Helper method to get the recommended blocksize for a new zvol (dataset of type VOLUME).

Get blocksize for pool "tank".

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.dataset.recommended_zvol_blocksize",
    "params": ["tank"]
}
pool.dataset.set_quota
Arguments:
{ "title": "ds", "type": "string" }
{ "type": "array", "title": "quotas", "default": [ { "quota_type": "USER", "id": "0", "quota_value": 0 } ], "items": [ { "type": "object" } ] }

There are three over-arching types of quotas for ZFS datasets. 1) dataset quotas and refquotas. If a DATASET quota type is specified in this API call, then the API acts as a wrapper for pool.dataset.update.

2) User and group quotas. These limit the amount of disk space consumed by files that are owned by the specified users or groups. If the respective "object quota" type is specified, then the quota limits the number of objects that may be owned by the specified user or group.

3) Project quotas. These limit the amount of disk space consumed by files that are owned by the specified project. Project quotas are not yet implemented.

This API allows users to set multiple quotas simultaneously by submitting a list of quotas. The list may contain all supported quota types.

ds the name of the target ZFS dataset.

quotas specifies a list of quota_entry entries to apply to dataset.

quota_entry entries have these required parameters:

quota_type: specifies the type of quota to apply to the dataset. Possible values are USER, USEROBJ, GROUP, GROUPOBJ, and DATASET. USEROBJ and GROUPOBJ quotas limit the number of objects consumed by the specified user or group.

id: the uid, gid, or name to which the quota applies. If quota_type is 'DATASET', then id must be either QUOTA or REFQUOTA.

quota_value: the quota size in bytes. Setting a value of 0 removes the user or group quota.
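
Set a 1 GiB user quota for "myuser" and a 10 GiB dataset quota on "tank/work" (dataset and user names are placeholder values):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.dataset.set_quota",
    "params": ["tank/work", [
        {"quota_type": "USER", "id": "myuser", "quota_value": 1073741824},
        {"quota_type": "DATASET", "id": "QUOTA", "quota_value": 10737418240}
    ]]
}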

pool.dataset.unlock
Job This endpoint is a Job. Please refer to the Jobs section for details.
A file can be uploaded to this endpoint. Please refer to the Jobs section to upload a file.
Arguments:
{ "title": "id", "type": "string" }
{ "type": "object", "properties": { "key_file": { "type": "boolean" }, "recursive": { "type": "boolean" }, "toggle_attachments": { "type": "boolean" }, "datasets": { "type": "array", "items": [ { "type": "object" } ] } }, "additionalProperties": false, "title": "unlock_options", "default": {} }

Unlock id dataset.

If the id dataset is not encrypted an exception will be raised. There is one exception: when id is a root dataset and unlock_options.recursive is specified, encryption validation will not be performed for id. This allows unlocking the encrypted children of the id pool.

For datasets which are encrypted with a passphrase, include the passphrase with unlock_options.datasets.

A JSON file containing encrypted dataset keys can be uploaded by specifying unlock_options.key_file. The format is similar to that used for exporting encrypted dataset keys.
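
Unlock dataset "tank/private" and its children, supplying a passphrase inline (dataset name and passphrase are placeholder values; the per-dataset entry shown here assumes name/passphrase keys as described above):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.dataset.unlock",
    "params": ["tank/private", {
        "recursive": true,
        "datasets": [{
            "name": "tank/private",
            "passphrase": "mysecretpassphrase"
        }]
    }]
}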

pool.dataset.update
Arguments:
{ "title": "id", "type": "string" }
{ "type": "object", "properties": { "volsize": { "type": "integer" }, "force_size": { "type": "boolean" }, "comments": { "type": "string" }, "sync": { "type": "string", "enum": [ "STANDARD", "ALWAYS", "DISABLED", "INHERIT" ] }, "compression": { "type": "string", "enum": [ "OFF", "LZ4", "GZIP", "GZIP-1", "GZIP-9", "ZSTD", "ZSTD-FAST", "ZLE", "LZJB", "ZSTD-1", "ZSTD-2", "ZSTD-3", "ZSTD-4", "ZSTD-5", "ZSTD-6", "ZSTD-7", "ZSTD-8", "ZSTD-9", "ZSTD-10", "ZSTD-11", "ZSTD-12", "ZSTD-13", "ZSTD-14", "ZSTD-15", "ZSTD-16", "ZSTD-17", "ZSTD-18", "ZSTD-19", "ZSTD-FAST-1", "ZSTD-FAST-2", "ZSTD-FAST-3", "ZSTD-FAST-4", "ZSTD-FAST-5", "ZSTD-FAST-6", "ZSTD-FAST-7", "ZSTD-FAST-8", "ZSTD-FAST-9", "ZSTD-FAST-10", "ZSTD-FAST-20", "ZSTD-FAST-30", "ZSTD-FAST-40", "ZSTD-FAST-50", "ZSTD-FAST-60", "ZSTD-FAST-70", "ZSTD-FAST-80", "ZSTD-FAST-90", "ZSTD-FAST-100", "ZSTD-FAST-500", "ZSTD-FAST-1000", "INHERIT" ] }, "atime": { "type": "string", "enum": [ "ON", "OFF", "INHERIT" ] }, "exec": { "type": "string", "enum": [ "ON", "OFF", "INHERIT" ] }, "managedby": { "type": "string" }, "quota": { "type": [ "integer", "null" ] }, "quota_warning": { "nullable": false, "anyOf": [ { "type": "integer" }, { "type": "string", "enum": [ "INHERIT" ] } ] }, "quota_critical": { "nullable": false, "anyOf": [ { "type": "integer" }, { "type": "string", "enum": [ "INHERIT" ] } ] }, "refquota": { "type": [ "integer", "null" ] }, "refquota_warning": { "nullable": false, "anyOf": [ { "type": "integer" }, { "type": "string", "enum": [ "INHERIT" ] } ] }, "refquota_critical": { "nullable": false, "anyOf": [ { "type": "integer" }, { "type": "string", "enum": [ "INHERIT" ] } ] }, "reservation": { "type": "integer" }, "refreservation": { "type": "integer" }, "special_small_block_size": { "type": "integer" }, "copies": { "type": "integer" }, "snapdir": { "type": "string", "enum": [ "VISIBLE", "HIDDEN", "INHERIT" ] }, "deduplication": { "type": "string", "enum": [ "ON", "VERIFY", "OFF", "INHERIT" ] }, "readonly": { "type": "string", "enum": [ "ON", "OFF", "INHERIT" ] }, "recordsize": { "type": "string", "enum": [ "512", "1K", "2K", "4K", "8K", "16K", "32K", "64K", "128K", "256K", "512K", "1024K", "INHERIT" ] }, "aclmode": { "type": "string", "enum": [ "PASSTHROUGH", "RESTRICTED" ] }, "acltype": { "type": "string", "enum": [ "NOACL", "NFS4ACL", "POSIXACL" ] }, "xattr": { "type": "string", "enum": [ "ON", "SA" ] } }, "additionalProperties": false, "title": "pool_dataset_create", "default": {} }

Updates a dataset/zvol id.

Update the comments for "tank/myuser".

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.dataset.update,
    "params": ["tank/myuser", {
        "comments": "Dataset for myuser, UPDATE #1"
    }]
}

pool.dataset.userprop

pool.dataset.userprop.create
Arguments:
{ "type": "object", "properties": { "id": { "type": "string" }, "property": { "type": "object", "properties": { "name": { "type": "string" }, "value": { "type": "string" } }, "additionalProperties": false } }, "additionalProperties": false, "title": "dataset_user_prop_create", "default": {} }

Create a user property for a given id dataset.
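
Create a user property on dataset "tank/myuser" (the property name and value are placeholders; ZFS user property names include a namespace prefix such as "org.example:"):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.dataset.userprop.create",
    "params": [{
        "id": "tank/myuser",
        "property": {
            "name": "org.example:backup",
            "value": "yes"
        }
    }]
}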

pool.dataset.userprop.delete
Arguments:
{ "title": "id", "type": "string" }
{ "type": "object", "properties": { "name": { "type": "string" } }, "additionalProperties": false, "title": "dataset_user_prop_delete", "default": {} }

Delete user property dataset_user_prop_delete.name for id dataset.

pool.dataset.userprop.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Query all user properties for ZFS datasets.

pool.dataset.userprop.update
Arguments:
{ "title": "id", "type": "string" }
{ "type": "object", "properties": { "name": { "type": "string" }, "value": { "type": "string" } }, "additionalProperties": false, "title": "dataset_user_prop_update", "default": {} }

Update dataset_user_prop_update.name user property for id dataset.

pool.resilver

pool.resilver.config
-
pool.resilver.update
Arguments:
{ "type": "object", "properties": { "begin": { "type": "string" }, "end": { "type": "string" }, "enabled": { "type": "boolean" }, "weekday": { "type": "array", "items": [ { "type": "integer" } ] } }, "additionalProperties": false, "title": "pool_resilver", "default": {} }

Configure Pool Resilver Priority.

If begin time is greater than end time the window rolls over to the next day, e.g. begin = "19:00", end = "05:00" will increase pool resilver priority from 19:00 of one day until 05:00 of the next day.

weekday follows crontab(5) values 0-7 (0 or 7 is Sun).

Enable pool resilver priority all business days from 7PM to 5AM.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.resilver.update",
    "params": [{
        "enabled": true,
        "begin": "19:00",
        "end": "05:00",
        "weekday": [1, 2, 3, 4, 5]
    }]
}

pool.scrub

pool.scrub.create
Arguments:
{ "type": "object", "properties": { "pool": { "type": "integer" }, "threshold": { "type": "integer" }, "description": { "type": "string" }, "schedule": { "type": "object", "properties": { "minute": { "type": "string" }, "hour": { "type": "string" }, "dom": { "type": "string" }, "month": { "type": "string" }, "dow": { "type": "string" } }, "additionalProperties": false }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "pool_scrub_create", "default": {} }

Create a scrub task for a pool.

threshold refers to the minimum amount of time in days that has to pass before a scrub can run again.

Create a scrub task for pool of id 1, to run every Sunday with a threshold of 35 days. The check will run at 3AM every Sunday.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.scrub.create"
    "params": [{
        "pool": 1,
        "threshold": 35,
        "description": "Monthly scrub for tank",
        "schedule": "0 3 * * 7",
        "enabled": true
    }]
}
pool.scrub.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete scrub task of id.

pool.scrub.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
pool.scrub.run
Arguments:
{ "title": "name", "type": "string" }
{ "type": "integer", "title": "threshold", "default": 35 }

Initiate a scrub of pool name if the last scrub was performed more than threshold days ago.
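
Run a scrub of pool "tank" with the default 35 day threshold (placeholder pool name):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.scrub.run",
    "params": ["tank", 35]
}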

pool.scrub.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "pool": { "type": "integer" }, "threshold": { "type": "integer" }, "description": { "type": "string" }, "schedule": { "type": "object", "properties": { "minute": { "type": "string" }, "hour": { "type": "string" }, "dom": { "type": "string" }, "month": { "type": "string" }, "dow": { "type": "string" } }, "additionalProperties": false }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "pool_scrub_create", "default": {} }

Update scrub task of id.

pool.snapshottask

pool.snapshottask.create
Arguments:
{ "type": "object", "properties": { "dataset": { "type": "string" }, "recursive": { "type": "boolean" }, "exclude": { "type": "array", "items": [ { "type": "string" } ] }, "lifetime_value": { "type": "integer" }, "lifetime_unit": { "type": "string", "enum": [ "HOUR", "DAY", "WEEK", "MONTH", "YEAR" ] }, "naming_schema": { "type": "string" }, "schedule": { "type": "object", "properties": { "minute": { "type": "string" }, "hour": { "type": "string" }, "dom": { "type": "string" }, "month": { "type": "string" }, "dow": { "type": "string" }, "begin": { "type": "string" }, "end": { "type": "string" } }, "additionalProperties": false }, "allow_empty": { "type": "boolean" }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "periodic_snapshot_create", "default": {} }

Create a Periodic Snapshot Task

Create a Periodic Snapshot Task that will take snapshots of the specified dataset on the specified schedule. Recursive snapshots can be created if the recursive flag is enabled. You can exclude specific child datasets or zvols from the snapshot. Snapshots will be automatically destroyed after a certain amount of time, specified by lifetime_value and lifetime_unit. If multiple periodic tasks create snapshots at the same time (for example hourly and daily at 00:00) the snapshot will be kept until the last of these tasks reaches its expiry time. Snapshots will be named according to naming_schema, which is a strftime-like template for the snapshot name and must contain %Y, %m, %d, %H and %M.

Create a recursive Periodic Snapshot Task for dataset data/work excluding data/work/temp. Snapshots will be created on weekdays every hour from 09:00 to 18:00 and will be stored for two weeks.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.snapshottask.create",
    "params": [{
        "dataset": "data/work",
        "recursive": true,
        "exclude": ["data/work/temp"],
        "lifetime_value": 2,
        "lifetime_unit": "WEEK",
        "naming_schema": "auto_%Y-%m-%d_%H-%M",
        "schedule": {
            "minute": "0",
            "hour": "*",
            "dom": "*",
            "month": "*",
            "dow": "1,2,3,4,5",
            "begin": "09:00",
            "end": "18:00"
        }
    }]
}
pool.snapshottask.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete a Periodic Snapshot Task with specific id

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.snapshottask.delete",
    "params": [
        1
    ]
}
pool.snapshottask.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
pool.snapshottask.run
Arguments:
{ "type": "integer", "title": "id" }

Execute a Periodic Snapshot Task of id.
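
Run the Periodic Snapshot Task of id 1 immediately:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.snapshottask.run",
    "params": [1]
}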

pool.snapshottask.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "dataset": { "type": "string" }, "recursive": { "type": "boolean" }, "exclude": { "type": "array", "items": [ { "type": "string" } ] }, "lifetime_value": { "type": "integer" }, "lifetime_unit": { "type": "string", "enum": [ "HOUR", "DAY", "WEEK", "MONTH", "YEAR" ] }, "naming_schema": { "type": "string" }, "schedule": { "type": "object", "properties": { "minute": { "type": "string" }, "hour": { "type": "string" }, "dom": { "type": "string" }, "month": { "type": "string" }, "dow": { "type": "string" }, "begin": { "type": "string" }, "end": { "type": "string" } }, "additionalProperties": false }, "allow_empty": { "type": "boolean" }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "periodic_snapshot_create", "default": {} }

Update a Periodic Snapshot Task with specific id

See the documentation for create method for information on payload contents

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "pool.snapshottask.update",
    "params": [
        1,
        {
            "dataset": "data/work",
            "recursive": true,
            "exclude": ["data/work/temp"],
            "lifetime_value": 2,
            "lifetime_unit": "WEEK",
            "naming_schema": "auto_%Y-%m-%d_%H-%M",
            "schedule": {
                "minute": "0",
                "hour": "*",
                "dom": "*",
                "month": "*",
                "dow": "1,2,3,4,5",
                "begin": "09:00",
                "end": "18:00"
            }
        }
    ]
}

replication

replication.count_eligible_manual_snapshots
Arguments:
{ "type": "array", "title": "datasets", "items": [ { "type": "string" } ] }
{ "type": "array", "title": "naming_schema", "items": [ { "type": "string" } ] }
{ "title": "transport", "type": "string", "enum": [ "SSH", "SSH+NETCAT", "LOCAL" ] }
{ "type": [ "integer", "null" ], "title": "ssh_credentials", "default": null }

Count how many existing snapshots of datasets match naming_schema.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "replication.count_eligible_manual_snapshots",
    "params": [
        "repl/work",
        ["auto-%Y-%m-%d_%H-%M"],
        "SSH",
        4
    ]
}
replication.create
Arguments:
{ "type": "object", "properties": { "name": { "type": "string" }, "direction": { "type": "string", "enum": [ "PUSH", "PULL" ] }, "transport": { "type": "string", "enum": [ "SSH", "SSH+NETCAT", "LOCAL" ] }, "ssh_credentials": { "type": [ "integer", "null" ] }, "netcat_active_side": { "type": [ "string", "null" ], "enum": [ "LOCAL", "REMOTE" ] }, "netcat_active_side_listen_address": { "type": [ "string", "null" ] }, "netcat_active_side_port_min": { "type": [ "integer", "null" ] }, "netcat_active_side_port_max": { "type": [ "integer", "null" ] }, "netcat_passive_side_connect_address": { "type": [ "string", "null" ] }, "source_datasets": { "type": "array", "items": [ { "type": "string" } ] }, "target_dataset": { "type": "string" }, "recursive": { "type": "boolean" }, "exclude": { "type": "array", "items": [ { "type": "string" } ] }, "properties": { "type": "boolean" }, "properties_exclude": { "type": "array", "items": [ { "type": "string" } ] }, "replicate": { "type": "boolean" }, "encryption": { "type": "boolean" }, "encryption_key": { "type": [ "string", "null" ] }, "encryption_key_format": { "type": [ "string", "null" ], "enum": [ "HEX", "PASSPHRASE" ] }, "encryption_key_location": { "type": [ "string", "null" ] }, "periodic_snapshot_tasks": { "type": "array", "items": [ { "type": "integer" } ] }, "naming_schema": { "type": "array", "items": [ { "type": "string" } ] }, "also_include_naming_schema": { "type": "array", "items": [ { "type": "string" } ] }, "auto": { "type": "boolean" }, "schedule": { "type": "object", "properties": { "minute": { "type": "string" }, "hour": { "type": "string" }, "dom": { "type": "string" }, "month": { "type": "string" }, "dow": { "type": "string" }, "begin": { "type": "string" }, "end": { "type": "string" } }, "additionalProperties": false }, "restrict_schedule": { "type": "object", "properties": { "minute": { "type": "string" }, "hour": { "type": "string" }, "dom": { "type": "string" }, "month": { "type": "string" }, "dow": { "type": "string" }, "begin": { "type": "string" }, "end": { "type": "string" } }, "additionalProperties": false }, "only_matching_schedule": { "type": "boolean" }, "allow_from_scratch": { "type": "boolean" }, "readonly": { "type": "string", "enum": [ "SET", "REQUIRE", "IGNORE" ] }, "hold_pending_snapshots": { "type": "boolean" }, "retention_policy": { "type": "string", "enum": [ "SOURCE", "CUSTOM", "NONE" ] }, "lifetime_value": { "type": [ "integer", "null" ] }, "lifetime_unit": { "type": [ "string", "null" ], "enum": [ "HOUR", "DAY", "WEEK", "MONTH", "YEAR" ] }, "compression": { "type": [ "string", "null" ], "enum": [ "LZ4", "PIGZ", "PLZIP" ] }, "speed_limit": { "type": [ "integer", "null" ] }, "large_block": { "type": "boolean" }, "embed": { "type": "boolean" }, "compressed": { "type": "boolean" }, "retries": { "type": "integer" }, "logging_level": { "type": [ "string", "null" ], "enum": [ "DEBUG", "INFO", "WARNING", "ERROR" ] }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "replication_create", "default": {} }

Create a Replication Task

Create a Replication Task that will push or pull ZFS snapshots to or from a remote host.

  • name specifies a name for replication task
  • direction specifies whether task will PUSH or PULL snapshots
  • transport is a method of snapshots transfer:
  • SSH transfers snapshots via an SSH connection. This method is supported everywhere but does not achieve great performance. ssh_credentials is a required field for this transport (Keychain Credential ID of type SSH_CREDENTIALS)
  • SSH+NETCAT uses an unencrypted connection for data transfer. This can only be used in trusted networks and requires a port (specified by a range from netcat_active_side_port_min to netcat_active_side_port_max) to be open on netcat_active_side. ssh_credentials is also required for the control connection
  • LOCAL replicates to or from localhost
  • source_datasets is a non-empty list of datasets to replicate snapshots from
  • target_dataset is a dataset to put snapshots into. It must exist on target side
  • recursive and exclude have the same meaning as for Periodic Snapshot Task
  • properties control whether we should send dataset properties along with snapshots
  • periodic_snapshot_tasks is a list of periodic snapshot task IDs that are sources of snapshots for this replication task. Only push replication tasks can be bound to periodic snapshot tasks.
  • naming_schema is a list of naming schemas for pull replication
  • also_include_naming_schema is a list of naming schemas for push replication
  • auto allows replication to run automatically on schedule or after bound periodic snapshot task
  • schedule is a schedule to run replication task. Only auto replication tasks without bound periodic snapshot tasks can have a schedule
  • restrict_schedule restricts when replication task with bound periodic snapshot tasks runs. For example, you can have periodic snapshot tasks that run every 15 minutes, but only run replication task every hour.
  • Enabling only_matching_schedule will only replicate snapshots that match schedule or restrict_schedule
  • allow_from_scratch will destroy all snapshots on target side and replicate everything from scratch if none of the snapshots on target side matches source snapshots
  • readonly controls destination datasets readonly property:
  • SET will set all destination datasets to readonly=on after finishing the replication
  • REQUIRE will require all existing destination datasets to have readonly=on property
  • IGNORE will avoid this kind of behavior
  • hold_pending_snapshots will prevent source snapshots from being deleted by retention if replication fails for some reason
  • retention_policy specifies how to delete old snapshots on target side:
  • SOURCE deletes snapshots that are absent on source side
  • CUSTOM deletes snapshots that are older than lifetime_value and lifetime_unit
  • NONE does not delete any snapshots
  • compression compresses SSH stream. Available only for SSH transport
  • speed_limit limits speed of SSH stream. Available only for SSH transport
  • large_block, embed and compressed are various ZFS stream flags documented in man zfs send
  • retries specifies number of retries before considering replication failed
{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "replication.create",
    "params": [{
        "name": "Work Backup",
        "direction": "PUSH",
        "transport": "SSH",
        "ssh_credentials": [12],
        "source_datasets", ["data/work"],
        "target_dataset": "repl/work",
        "recursive": true,
        "periodic_snapshot_tasks": [5],
        "auto": true,
        "restrict_schedule": {
            "minute": "0",
            "hour": "*/2",
            "dom": "*",
            "month": "*",
            "dow": "1,2,3,4,5",
            "begin": "09:00",
            "end": "18:00"
        },
        "only_matching_schedule": true,
        "retention_policy": "CUSTOM",
        "lifetime_value": 1,
        "lifetime_unit": "WEEK",
    }]
}
replication.create_dataset
Arguments:
{ "title": "dataset", "type": "string" }
{ "title": "transport", "type": "string", "enum": [ "SSH", "SSH+NETCAT", "LOCAL" ] }
{ "type": [ "integer", "null" ], "title": "ssh_credentials", "default": null }

Creates dataset on remote side

Accepts dataset name, transport and SSH credentials ID (for non-local transport)

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "replication.create_dataset",
    "params": [
        "repl/work",
        "SSH",
        7
    ]
}
replication.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete a Replication Task with specific id

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "replication.delete",
    "params": [
        1
    ]
}
replication.list_datasets
Arguments:
{ "title": "transport", "type": "string", "enum": [ "SSH", "SSH+NETCAT", "LOCAL" ] }
{ "type": [ "integer", "null" ], "title": "ssh_credentials", "default": null }

List datasets on remote side

Accepts transport and SSH credentials ID (for non-local transport)

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "replication.list_datasets",
    "params": [
        "SSH",
        7
    ]
}
replication.list_naming_schemas

List all naming schemas used in periodic snapshot and replication tasks.

replication.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
replication.restore
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "name": { "type": "string" }, "target_dataset": { "type": "string" } }, "additionalProperties": false, "title": "replication_restore", "default": {} }

Create the opposite of replication task id (PULL if it was PUSH and vice versa).
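
For illustration, a restore call for a hypothetical replication task of id 5 might look as follows (the task id, name and target dataset are placeholders):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "replication.restore",
    "params": [
        5,
        {
            "name": "Work Backup restore",
            "target_dataset": "data/restored"
        }
    ]
}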

replication.run
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "integer", "title": "id" }

Run Replication Task of id.
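
A minimal example, running replication task 1 (the id is a placeholder):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "replication.run",
    "params": [
        1
    ]
}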

replication.target_unmatched_snapshots
Arguments:
{ "title": "direction", "type": "string", "enum": [ "PUSH", "PULL" ] }
{ "type": "array", "title": "source_datasets", "items": [ { "type": "string" } ] }
{ "title": "target_dataset", "type": "string" }
{ "title": "transport", "type": "string", "enum": [ "SSH", "SSH+NETCAT", "LOCAL", "LEGACY" ] }
{ "type": [ "integer", "null" ], "title": "ssh_credentials", "default": null }

Check if target has any snapshots that do not exist on source.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "replication.target_unmatched_snapshots",
    "params": [
        "PUSH",
        ["repl/work", "repl/games"],
        "backup",
        "SSH",
        4
    ]
}

Returns

{
    "backup/work": ["auto-2019-10-15_13-00", "auto-2019-10-15_09-00"],
    "backup/games": ["auto-2019-10-15_13-00"],
}
replication.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "name": { "type": "string" }, "direction": { "type": "string", "enum": [ "PUSH", "PULL" ] }, "transport": { "type": "string", "enum": [ "SSH", "SSH+NETCAT", "LOCAL" ] }, "ssh_credentials": { "type": [ "integer", "null" ] }, "netcat_active_side": { "type": [ "string", "null" ], "enum": [ "LOCAL", "REMOTE" ] }, "netcat_active_side_listen_address": { "type": [ "string", "null" ] }, "netcat_active_side_port_min": { "type": [ "integer", "null" ] }, "netcat_active_side_port_max": { "type": [ "integer", "null" ] }, "netcat_passive_side_connect_address": { "type": [ "string", "null" ] }, "source_datasets": { "type": "array", "items": [ { "type": "string" } ] }, "target_dataset": { "type": "string" }, "recursive": { "type": "boolean" }, "exclude": { "type": "array", "items": [ { "type": "string" } ] }, "properties": { "type": "boolean" }, "properties_exclude": { "type": "array", "items": [ { "type": "string" } ] }, "replicate": { "type": "boolean" }, "encryption": { "type": "boolean" }, "encryption_key": { "type": [ "string", "null" ] }, "encryption_key_format": { "type": [ "string", "null" ], "enum": [ "HEX", "PASSPHRASE" ] }, "encryption_key_location": { "type": [ "string", "null" ] }, "periodic_snapshot_tasks": { "type": "array", "items": [ { "type": "integer" } ] }, "naming_schema": { "type": "array", "items": [ { "type": "string" } ] }, "also_include_naming_schema": { "type": "array", "items": [ { "type": "string" } ] }, "auto": { "type": "boolean" }, "schedule": { "type": "object", "properties": { "minute": { "type": "string" }, "hour": { "type": "string" }, "dom": { "type": "string" }, "month": { "type": "string" }, "dow": { "type": "string" }, "begin": { "type": "string" }, "end": { "type": "string" } }, "additionalProperties": false }, "restrict_schedule": { "type": "object", "properties": { "minute": { "type": "string" }, "hour": { "type": "string" }, "dom": { "type": "string" }, "month": { "type": "string" }, "dow": { "type": "string" }, "begin": { "type": "string" }, "end": { "type": "string" } }, "additionalProperties": false }, "only_matching_schedule": { "type": "boolean" }, "allow_from_scratch": { "type": "boolean" }, "readonly": { "type": "string", "enum": [ "SET", "REQUIRE", "IGNORE" ] }, "hold_pending_snapshots": { "type": "boolean" }, "retention_policy": { "type": "string", "enum": [ "SOURCE", "CUSTOM", "NONE" ] }, "lifetime_value": { "type": [ "integer", "null" ] }, "lifetime_unit": { "type": [ "string", "null" ], "enum": [ "HOUR", "DAY", "WEEK", "MONTH", "YEAR" ] }, "compression": { "type": [ "string", "null" ], "enum": [ "LZ4", "PIGZ", "PLZIP" ] }, "speed_limit": { "type": [ "integer", "null" ] }, "large_block": { "type": "boolean" }, "embed": { "type": "boolean" }, "compressed": { "type": "boolean" }, "retries": { "type": "integer" }, "logging_level": { "type": [ "string", "null" ], "enum": [ "DEBUG", "INFO", "WARNING", "ERROR" ] }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "replication_create", "default": {} }

Update a Replication Task with specific id

See the documentation for create method for information on payload contents

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "replication.update",
    "params": [
        7,
        {
            "name": "Work Backup",
            "direction": "PUSH",
            "transport": "SSH",
            "ssh_credentials": [12],
            "source_datasets", ["data/work"],
            "target_dataset": "repl/work",
            "recursive": true,
            "periodic_snapshot_tasks": [5],
            "auto": true,
            "restrict_schedule": {
                "minute": "0",
                "hour": "*/2",
                "dom": "*",
                "month": "*",
                "dow": "1,2,3,4,5",
                "begin": "09:00",
                "end": "18:00"
            },
            "only_matching_schedule": true,
            "retention_policy": "CUSTOM",
            "lifetime_value": 1,
            "lifetime_unit": "WEEK",
        }
    ]
}

replication.config

replication.config.config
-
replication.config.update
Arguments:
{ "type": "object", "properties": { "max_parallel_replication_tasks": { "type": [ "integer", "null" ] } }, "additionalProperties": false, "title": "replication_config_update", "default": {} }

max_parallel_replication_tasks represents a maximum number of parallel replication tasks running.
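
For example, to allow at most two replication tasks to run in parallel (the value is illustrative):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "replication.config.update",
    "params": [{
        "max_parallel_replication_tasks": 2
    }]
}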

reporting

reporting.config
-
reporting.get_data
Arguments:
{ "type": "array", "title": "graphs", "items": [ { "type": "object" } ] }
{ "type": "object", "properties": { "unit": { "type": "string", "enum": [ "HOUR", "DAY", "WEEK", "MONTH", "YEAR" ] }, "page": { "type": "integer" }, "start": { "type": "string" }, "end": { "type": "string" }, "aggregate": { "type": "boolean" } }, "additionalProperties": false, "title": "reporting_query", "default": {} }

Get reporting data for given graphs.

List of possible graphs can be retrieved using reporting.graphs call.

For the time period of the graph either unit and page OR start and end should be used, not both.

aggregate will return aggregate available data for each graph (e.g. min, max, mean).

Get graph data of "nfsstat" from the last hour.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "reporting.get_data",
    "params": [
        [{"name": "nfsstat"}],
        {"unit": "HOURLY"},
    ]
}
reporting.graphs
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
reporting.update
Arguments:
{ "type": "object", "properties": { "cpu_in_percentage": { "type": "boolean" }, "graphite": { "type": "string" }, "graphite_separateinstances": { "type": "boolean" }, "graph_age": { "type": "integer" }, "graph_points": { "type": "integer" }, "confirm_rrd_destroy": { "type": "boolean" } }, "additionalProperties": false, "title": "reporting_update", "default": {} }

Configure Reporting Database settings.

If cpu_in_percentage is true, collectd reports CPU usage in percentage instead of "jiffies".

graphite specifies a destination hostname or IP for collectd data sent by the Graphite plugin.

graphite_separateinstances corresponds to collectd SeparateInstances option.

graph_age specifies the maximum age of stored graphs in months. graph_points is the number of points for each hourly, daily, weekly, etc. graph. Changing these requires destroying the current reporting database, so when these fields are changed, an additional confirm_rrd_destroy: true flag must be present.

Update reporting settings

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "reporting.update",
    "params": [{
        "cpu_in_percentage": false,
        "graphite": "",
    }]
}

Recreate reporting database with new settings

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "reporting.update",
    "params": [{
        "graph_age": 12,
        "graph_points": 1200,
        "confirm_rrd_destroy": true,
    }]
}

route

route.ipv4gw_reachable
Arguments:
{ "title": "ipv4_gateway", "type": "string" }

Get the IPv4 gateway and verify if it is reachable by any interface.

Returns: bool: True if the gateway is reachable or otherwise False.
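
A minimal example with a placeholder gateway address:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "route.ipv4gw_reachable",
    "params": [
        "192.168.0.1"
    ]
}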

route.system_routes
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Get current/applied network routes.

rsyncd

rsyncd.config
-
rsyncd.update
Arguments:
{ "type": "object", "properties": { "port": { "type": "integer" }, "auxiliary": { "type": "string" } }, "additionalProperties": false, "title": "rsyncd_update", "default": {} }

Update Rsyncd Service Configuration.

auxiliary attribute can be used to pass on any additional parameters from rsyncd.conf(5).
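
An illustrative update setting the listening port (the value shown is the conventional rsyncd port and is only a placeholder):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "rsyncd.update",
    "params": [{
        "port": 873
    }]
}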

rsyncmod

rsyncmod.create
Arguments:
{ "type": "object", "properties": { "enabled": { "type": "boolean" }, "name": { "type": "string" }, "comment": { "type": "string" }, "path": { "type": "string" }, "mode": { "type": "string", "enum": [ "RO", "RW", "WO" ] }, "maxconn": { "type": "integer" }, "user": { "type": "string" }, "group": { "type": "string" }, "hostsallow": { "type": "array", "items": [ { "type": "string" } ] }, "hostsdeny": { "type": "array", "items": [ { "type": "string" } ] }, "auxiliary": { "type": "string" } }, "additionalProperties": false, "title": "rsyncmod_create", "default": {} }

Create a Rsyncmod module.

path represents the path to a dataset. Path length is limited to 1023 characters, the maximum enforced by FreeBSD. This limit can also be reached by deeply nested paths encountered during a recursive transfer, so the user must ensure that no path involved in the transfer exceeds it, or shorten the offending paths to stay within the limit.

maxconn is an integer value representing the maximum number of simultaneous connections. Zero represents unlimited.

hostsallow is a list of patterns to match hostname/ip address of a connecting client. If list is empty, all hosts are allowed.

hostsdeny is a list of patterns to match hostname/ip address of a connecting client. If the pattern is matched, access is denied to the client. If no client should be denied, this should be left empty.

auxiliary attribute can be used to pass on any additional parameters from rsyncd.conf(5).
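
A minimal example creating a read-only module; the name, path, user and group values are placeholders:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "rsyncmod.create",
    "params": [{
        "enabled": true,
        "name": "backup_module",
        "comment": "Read-only backup module",
        "path": "/mnt/vol1/backup",
        "mode": "RO",
        "maxconn": 0,
        "user": "nobody",
        "group": "nobody",
        "hostsallow": ["192.168.0.0/24"],
        "hostsdeny": []
    }]
}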

rsyncmod.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete Rsyncmod module of id.

rsyncmod.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
rsyncmod.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "enabled": { "type": "boolean" }, "name": { "type": "string" }, "comment": { "type": "string" }, "path": { "type": "string" }, "mode": { "type": "string", "enum": [ "RO", "RW", "WO" ] }, "maxconn": { "type": "integer" }, "user": { "type": "string" }, "group": { "type": "string" }, "hostsallow": { "type": "array", "items": [ { "type": "string" } ] }, "hostsdeny": { "type": "array", "items": [ { "type": "string" } ] }, "auxiliary": { "type": "string" } }, "additionalProperties": false, "title": "rsyncmod_create", "default": {} }

Update Rsyncmod module of id.

rsynctask

rsynctask.create
Arguments:
{ "type": "object", "properties": { "path": { "type": "string" }, "user": { "type": "string" }, "remotehost": { "type": "string" }, "remoteport": { "type": "integer" }, "mode": { "type": "string", "enum": [ "MODULE", "SSH" ] }, "remotemodule": { "type": "string" }, "remotepath": { "type": "string" }, "validate_rpath": { "type": "boolean" }, "direction": { "type": "string", "enum": [ "PULL", "PUSH" ] }, "desc": { "type": "string" }, "schedule": { "type": "object", "properties": { "minute": { "type": "string" }, "hour": { "type": "string" }, "dom": { "type": "string" }, "month": { "type": "string" }, "dow": { "type": "string" } }, "additionalProperties": false }, "recursive": { "type": "boolean" }, "times": { "type": "boolean" }, "compress": { "type": "boolean" }, "archive": { "type": "boolean" }, "delete": { "type": "boolean" }, "quiet": { "type": "boolean" }, "preserveperm": { "type": "boolean" }, "preserveattr": { "type": "boolean" }, "delayupdates": { "type": "boolean" }, "extra": { "type": "array", "items": [ { "type": "string" } ] }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "rsync_task_create", "default": {} }

Create a Rsync Task.

See the comment in Rsyncmod about path length limits.

remotehost is the IP address or hostname of the remote system. If the username differs on the remote host, the "username@remote_host" format should be used.

mode selects the operating mechanism for Rsync, i.e. Rsync Module mode or Rsync SSH mode.

remotemodule is the name of the remote module; this attribute should be specified when mode is set to MODULE.

remotepath specifies the path on the remote system.

validate_rpath is a boolean which, when set, validates the existence of the remote path.

direction specifies if data should be PULLED or PUSHED from the remote system.

compress when set reduces the size of the data which is to be transmitted.

archive when set makes rsync run recursively, preserving symlinks, permissions, modification times, group, and special files.

delete when set deletes files in the destination directory which do not exist in the source directory.

preserveperm when set preserves original file permissions.

Create a Rsync Task which pulls data from a remote system every 5 minutes.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "rsynctask.create",
    "params": [{
        "enabled": true,
        "schedule": {
            "minute": "5",
            "hour": "*",
            "dom": "*",
            "month": "*",
            "dow": "*"
        },
        "desc": "Test rsync task",
        "user": "root",
        "mode": "MODULE",
        "remotehost": "root@192.168.0.10",
        "compress": true,
        "archive": true,
        "direction": "PULL",
        "path": "/mnt/vol1/rsync_dataset",
        "remotemodule": "remote_module1"
    }]
}
rsynctask.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete Rsync Task of id.

rsynctask.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
rsynctask.run
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "integer", "title": "id" }

Job to run rsync task of id.

Output is saved to job log excerpt (not syslog).

rsynctask.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "path": { "type": "string" }, "user": { "type": "string" }, "remotehost": { "type": "string" }, "remoteport": { "type": "integer" }, "mode": { "type": "string", "enum": [ "MODULE", "SSH" ] }, "remotemodule": { "type": "string" }, "remotepath": { "type": "string" }, "validate_rpath": { "type": "boolean" }, "direction": { "type": "string", "enum": [ "PULL", "PUSH" ] }, "desc": { "type": "string" }, "schedule": { "type": "object", "properties": { "minute": { "type": "string" }, "hour": { "type": "string" }, "dom": { "type": "string" }, "month": { "type": "string" }, "dow": { "type": "string" } }, "additionalProperties": false }, "recursive": { "type": "boolean" }, "times": { "type": "boolean" }, "compress": { "type": "boolean" }, "archive": { "type": "boolean" }, "delete": { "type": "boolean" }, "quiet": { "type": "boolean" }, "preserveperm": { "type": "boolean" }, "preserveattr": { "type": "boolean" }, "delayupdates": { "type": "boolean" }, "extra": { "type": "array", "items": [ { "type": "string" } ] }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "rsync_task_create", "default": {} }

Update Rsync Task of id.

s3

s3.bindip_choices

Return ip choices for S3 service to use.

s3.config
-
s3.update
Arguments:
{ "type": "object", "properties": { "bindip": { "type": "string" }, "bindport": { "type": "integer" }, "access_key": { "type": "string" }, "secret_key": { "type": "string" }, "browser": { "type": "boolean" }, "storage_path": { "type": "string" }, "certificate": { "type": [ "integer", "null" ] } }, "additionalProperties": false, "title": "s3_update", "default": {} }

Update S3 Service Configuration.

access_key must only contain alphanumeric characters and should be between 5 and 20 characters.

secret_key must only contain alphanumeric characters and should be between 8 and 40 characters.

browser when set, enables the web user interface for the S3 Service.

certificate is a valid certificate id which exists in the system. This is used to enable secure S3 connections.
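
An illustrative update with placeholder credentials and storage path:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "s3.update",
    "params": [{
        "bindip": "0.0.0.0",
        "bindport": 9000,
        "access_key": "MINIOACCESS",
        "secret_key": "miniosecretkey",
        "browser": true,
        "storage_path": "/mnt/vol1/s3",
        "certificate": null
    }]
}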

sensor

sensor.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-

service

service.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Query all system services with query-filters and query-options.

service.reload
Arguments:
{ "title": "service", "type": "string" }
{ "type": "object", "properties": { "ha_propagate": { "type": "boolean" } }, "additionalProperties": false, "title": "service-control", "default": {} }

Reload the service specified by service.

service.restart
Arguments:
{ "title": "service", "type": "string" }
{ "type": "object", "properties": { "ha_propagate": { "type": "boolean" } }, "additionalProperties": false, "title": "service-control", "default": {} }

Restart the service specified by service.

service.start
Arguments:
{ "title": "service", "type": "string" }
{ "type": "object", "properties": { "ha_propagate": { "type": "boolean" } }, "additionalProperties": false, "title": "service-control", "default": {} }

Start the service specified by service.
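
A minimal example starting a service (the service name shown is illustrative):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "service.start",
    "params": [
        "ssh",
        {"ha_propagate": true}
    ]
}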

service.started

Test if service specified by service has been started.

service.stop
Arguments:
{ "title": "service", "type": "string" }
{ "type": "object", "properties": { "ha_propagate": { "type": "boolean" } }, "additionalProperties": false, "title": "service-control", "default": {} }

Stop the service specified by service.

service.terminate_process
Arguments:
{ "type": "integer", "title": "pid" }
{ "type": "integer", "title": "timeout", "default": 10 }

Terminate process by pid.

First sends the TERM signal, then, if the process was not terminated within timeout seconds, sends the KILL signal.

Returns true if the process has been successfully terminated with TERM and false if we had to use KILL.
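
A minimal example terminating a hypothetical pid 1234 with a 30 second timeout:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "service.terminate_process",
    "params": [
        1234,
        30
    ]
}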

service.update
Arguments:
{ "title": "id_or_name", "type": "string" }
{ "type": "object", "properties": { "enable": { "type": "boolean" } }, "additionalProperties": false, "title": "service-update", "default": {} }

Update service entry of id_or_name.

Currently it only accepts the enable option, which controls whether the service should start on boot.
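
For example, enabling a service so it starts on boot (the service name is illustrative):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "service.update",
    "params": [
        "ssh",
        {"enable": true}
    ]
}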

sharing.afp

sharing.afp.create
Arguments:
{ "type": "object", "properties": { "path": { "type": "string" }, "home": { "type": "boolean" }, "name": { "type": "string" }, "comment": { "type": "string" }, "allow": { "type": "array", "items": [ { "type": "null" } ] }, "deny": { "type": "array", "items": [ { "type": "null" } ] }, "ro": { "type": "array", "items": [ { "type": "null" } ] }, "rw": { "type": "array", "items": [ { "type": "null" } ] }, "timemachine": { "type": "boolean" }, "timemachine_quota": { "type": "integer" }, "nodev": { "type": "boolean" }, "nostat": { "type": "boolean" }, "upriv": { "type": "boolean" }, "fperm": { "type": "string" }, "dperm": { "type": "string" }, "umask": { "type": "string" }, "hostsallow": { "type": "array", "items": [ { "type": "null" } ] }, "hostsdeny": { "type": "array", "items": [ { "type": "null" } ] }, "vuid": { "type": [ "string", "null" ] }, "auxparams": { "type": "string" }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "sharingafp_create", "default": {} }

Create AFP share.

allow, deny, ro, and rw are lists of users and groups. Groups are designated by an @ prefix.

hostsallow and hostsdeny are lists of hosts and/or networks.
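
A minimal example; the share name, path, users and groups are placeholders:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "sharing.afp.create",
    "params": [{
        "name": "afp_share",
        "path": "/mnt/vol1/afp_share",
        "comment": "Example AFP share",
        "allow": ["jdoe", "@staff"],
        "rw": ["jdoe"],
        "timemachine": false,
        "enabled": true
    }]
}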

sharing.afp.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete AFP share id.

sharing.afp.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
sharing.afp.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "path": { "type": "string" }, "home": { "type": "boolean" }, "name": { "type": "string" }, "comment": { "type": "string" }, "allow": { "type": "array", "items": [ { "type": "null" } ] }, "deny": { "type": "array", "items": [ { "type": "null" } ] }, "ro": { "type": "array", "items": [ { "type": "null" } ] }, "rw": { "type": "array", "items": [ { "type": "null" } ] }, "timemachine": { "type": "boolean" }, "timemachine_quota": { "type": "integer" }, "nodev": { "type": "boolean" }, "nostat": { "type": "boolean" }, "upriv": { "type": "boolean" }, "fperm": { "type": "string" }, "dperm": { "type": "string" }, "umask": { "type": "string" }, "hostsallow": { "type": "array", "items": [ { "type": "null" } ] }, "hostsdeny": { "type": "array", "items": [ { "type": "null" } ] }, "vuid": { "type": [ "string", "null" ] }, "auxparams": { "type": "string" }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "sharingafp_create", "default": {} }

Update AFP share id.

sharing.nfs

sharing.nfs.create
Arguments:
{ "type": "object", "properties": { "paths": { "type": "array", "items": [ { "type": "string" } ] }, "comment": { "type": "string" }, "networks": { "type": "array", "items": [ { "type": "string" } ] }, "hosts": { "type": "array", "items": [ { "type": "string" } ] }, "alldirs": { "type": "boolean" }, "ro": { "type": "boolean" }, "quiet": { "type": "boolean" }, "maproot_user": { "type": [ "string", "null" ] }, "maproot_group": { "type": [ "string", "null" ] }, "mapall_user": { "type": [ "string", "null" ] }, "mapall_group": { "type": [ "string", "null" ] }, "security": { "type": "array", "items": [ { "type": "string" } ] }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "sharingnfs_create", "default": {} }

Create a NFS Share.

paths is a list of valid paths which are configured to be shared on this share.

networks is a list of authorized networks that are allowed to access the share, in "network/mask" CIDR notation. If empty, all networks are allowed.

hosts is a list of IPs/hostnames which are allowed to access the share. If empty, all IPs/hostnames are allowed.

alldirs is a boolean value which, when set, indicates that the client can mount any subdirectory of the selected pool or dataset.
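
A minimal example; the paths, network and mapping values are placeholders:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "sharing.nfs.create",
    "params": [{
        "paths": ["/mnt/vol1/nfs_share"],
        "comment": "Example NFS share",
        "networks": ["192.168.0.0/24"],
        "hosts": [],
        "ro": false,
        "maproot_user": "root",
        "maproot_group": "wheel",
        "enabled": true
    }]
}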

sharing.nfs.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete NFS Share of id.

sharing.nfs.human_identifier
-
sharing.nfs.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
sharing.nfs.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "paths": { "type": "array", "items": [ { "type": "string" } ] }, "comment": { "type": "string" }, "networks": { "type": "array", "items": [ { "type": "string" } ] }, "hosts": { "type": "array", "items": [ { "type": "string" } ] }, "alldirs": { "type": "boolean" }, "ro": { "type": "boolean" }, "quiet": { "type": "boolean" }, "maproot_user": { "type": [ "string", "null" ] }, "maproot_group": { "type": [ "string", "null" ] }, "mapall_user": { "type": [ "string", "null" ] }, "mapall_group": { "type": [ "string", "null" ] }, "security": { "type": "array", "items": [ { "type": "string" } ] }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "sharingnfs_create", "default": {} }

Update NFS Share of id.

sharing.smb

sharing.smb.create
Arguments:
{ "type": "object", "properties": { "purpose": { "type": "string", "enum": [ "NO_PRESET", "DEFAULT_SHARE", "ENHANCED_TIMEMACHINE", "MULTI_PROTOCOL_AFP", "MULTI_PROTOCOL_NFS", "PRIVATE_DATASETS", "WORM_DROPBOX" ] }, "path": { "type": "string" }, "path_suffix": { "type": "string" }, "home": { "type": "boolean" }, "name": { "type": "string" }, "comment": { "type": "string" }, "ro": { "type": "boolean" }, "browsable": { "type": "boolean" }, "timemachine": { "type": "boolean" }, "recyclebin": { "type": "boolean" }, "guestok": { "type": "boolean" }, "abe": { "type": "boolean" }, "hostsallow": { "type": "array", "items": [ { "type": "null" } ] }, "hostsdeny": { "type": "array", "items": [ { "type": "null" } ] }, "aapl_name_mangling": { "type": "boolean" }, "acl": { "type": "boolean" }, "durablehandle": { "type": "boolean" }, "shadowcopy": { "type": "boolean" }, "streams": { "type": "boolean" }, "fsrvp": { "type": "boolean" }, "auxsmbconf": { "type": "string" }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "sharingsmb_create", "default": {} }

Create a SMB Share.

purpose applies common configuration presets depending on intended purpose.

timemachine when set, enables Time Machine backups for this share.

ro when enabled, prohibits write access to the share.

guestok when enabled, allows access to this share without a password.

hostsallow is a list of hostnames / IP addresses which have access to this share.

hostsdeny is a list of hostnames / IP addresses which are not allowed access to this share. If only a handful of hostnames should be allowed access, hostsdeny can be passed "ALL", which denies access to all hostnames except those listed in hostsallow.

acl enables support for storing the SMB Security Descriptor as a Filesystem ACL.

streams enables support for storing alternate data streams as filesystem extended attributes.

fsrvp enables support for the filesystem remote VSS protocol. This allows clients to create ZFS snapshots through RPC.

shadowcopy enables support for the volume shadow copy service.

auxsmbconf is a string of additional smb4.conf parameters not covered by the system's API.
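
A minimal example; the path and share name are placeholders:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "sharing.smb.create",
    "params": [{
        "purpose": "DEFAULT_SHARE",
        "path": "/mnt/vol1/smb_share",
        "name": "smb_share",
        "comment": "Example SMB share",
        "ro": false,
        "guestok": false,
        "enabled": true
    }]
}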

sharing.smb.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete SMB Share of id. This will forcibly disconnect SMB clients that are accessing the share.

sharing.smb.presets

Retrieve pre-defined configuration sets for specific use-cases. These parameter combinations are often non-obvious, but beneficial in these scenarios.

sharing.smb.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
sharing.smb.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "purpose": { "type": "string", "enum": [ "NO_PRESET", "DEFAULT_SHARE", "ENHANCED_TIMEMACHINE", "MULTI_PROTOCOL_AFP", "MULTI_PROTOCOL_NFS", "PRIVATE_DATASETS", "WORM_DROPBOX" ] }, "path": { "type": "string" }, "path_suffix": { "type": "string" }, "home": { "type": "boolean" }, "name": { "type": "string" }, "comment": { "type": "string" }, "ro": { "type": "boolean" }, "browsable": { "type": "boolean" }, "timemachine": { "type": "boolean" }, "recyclebin": { "type": "boolean" }, "guestok": { "type": "boolean" }, "abe": { "type": "boolean" }, "hostsallow": { "type": "array", "items": [ { "type": "null" } ] }, "hostsdeny": { "type": "array", "items": [ { "type": "null" } ] }, "aapl_name_mangling": { "type": "boolean" }, "acl": { "type": "boolean" }, "durablehandle": { "type": "boolean" }, "shadowcopy": { "type": "boolean" }, "streams": { "type": "boolean" }, "fsrvp": { "type": "boolean" }, "auxsmbconf": { "type": "string" }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "sharingsmb_create", "default": {} }

Update SMB Share of id.

sharing.webdav

sharing.webdav.create
Arguments:
{ "type": "object", "properties": { "perm": { "type": "boolean" }, "ro": { "type": "boolean" }, "comment": { "type": "string" }, "name": { "type": "string" }, "path": { "type": "string" }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "webdav_share_create", "default": {} }

Create a Webdav Share.

ro when enabled prohibits users from writing to this share.

perm when enabled recursively changes the ownership of this share to the webdav user and group.
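
A minimal example with a placeholder name and path:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "sharing.webdav.create",
    "params": [{
        "name": "webdav_share",
        "comment": "Example WebDAV share",
        "path": "/mnt/vol1/webdav_share",
        "ro": false,
        "perm": true,
        "enabled": true
    }]
}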

sharing.webdav.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete Webdav Share of id.

sharing.webdav.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
sharing.webdav.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "perm": { "type": "boolean" }, "ro": { "type": "boolean" }, "comment": { "type": "string" }, "name": { "type": "string" }, "path": { "type": "string" }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "webdav_share_create", "default": {} }

Update Webdav Share of id.

smart

smart.config
-
smart.update
Arguments:
{ "type": "object", "properties": { "interval": { "type": "integer" }, "powermode": { "type": "string", "enum": [ "NEVER", "SLEEP", "STANDBY", "IDLE" ] }, "difference": { "type": "integer" }, "informational": { "type": "integer" }, "critical": { "type": "integer" } }, "additionalProperties": false, "title": "smart_update", "default": {} }

Update SMART Service Configuration.

interval is an integer value in minutes which defines how often smartd activates to check if any tests are configured to run.

critical, informational and difference are integer values on which alerts for SMART are configured if a disk's temperature crosses the assigned threshold for the respective attribute. They default to 0, which indicates they are disabled.
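
An illustrative configuration; the interval and threshold values are placeholders:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "smart.update",
    "params": [{
        "interval": 30,
        "powermode": "NEVER",
        "difference": 0,
        "informational": 40,
        "critical": 45
    }]
}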

smart.test

smart.test.create
Arguments:
{ "type": "object", "properties": { "schedule": { "type": "object", "properties": { "hour": { "type": "string" }, "dom": { "type": "string" }, "month": { "type": "string" }, "dow": { "type": "string" } }, "additionalProperties": false }, "desc": { "type": "string" }, "all_disks": { "type": "boolean" }, "disks": { "type": "array", "items": [ { "type": "string" } ] }, "type": { "type": "string", "enum": [ "LONG", "SHORT", "CONVEYANCE", "OFFLINE" ] } }, "additionalProperties": false, "title": "smart_task_create", "default": {} }

Create a SMART Test Task.

disks is a list of valid disks which should be monitored in this task.

type is specified to represent the type of SMART test to be executed.

all_disks when enabled sets the task to cover all disks in which case disks is not required.

Create a SMART Test Task which runs at 30 minutes past every hour.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "smart.test.create",
    "params": [{
        "schedule": {
            "minute": "30",
            "hour": "*",
            "dom": "*",
            "month": "*",
            "dow": "*"
        },
        "all_disks": true,
        "type": "OFFLINE",
        "disks": []
    }]
}
smart.test.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete SMART Test Task of id.

smart.test.disk_choices
Arguments:
{ "type": "boolean", "title": "full_disk", "default": false }

Returns disk choices for S.M.A.R.T. test.

full_disk will return full disk objects instead of just names.

smart.test.manual_test
Arguments:
{ "type": "array", "title": "disks", "items": [ { "type": "object" } ] }

Run manual SMART tests for disks.

type indicates what type of SMART test will be run and must be specified.

smart.test.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
smart.test.results
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Get disk(s) S.M.A.R.T. test(s) results.

Get all disks tests results

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "smart.test.results",
    "params": []
}

returns

[
  // ATA disk
  {
    "disk": "ada0",
    "tests": [
      {
        "num": 1,
        "description": "Short offline",
        "status": "SUCCESS",
        "status_verbose": "Completed without error",
        "remaining": 0.0,
        "lifetime": 16590,
        "lba_of_first_error": null
      }
    ]
  },
  // SCSI disk
  {
    "disk": "ada1",
    "tests": [
      {
        "num": 1,
        "description": "Background long",
        "status": "FAILED",
        "status_verbose": "Completed, segment failed",
        "segment_number": null,
        "lifetime": 3943,
        "lba_of_first_error": null
      }
    ]
  }
]

Get specific disk test results

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "smart.test.results",
    "params": [
      [["disk", "=", "ada0"]],
      {"get": true}
    ]
}

returns

{
  "disk": "ada0",
  "tests": [
    {
      "num": 1,
      "description": "Short offline",
      "status": "SUCCESS",
      "status_verbose": "Completed without error",
      "remaining": 0.0,
      "lifetime": 16590,
      "lba_of_first_error": null
    }
  ]
}
smart.test.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "schedule": { "type": "object", "properties": { "hour": { "type": "string" }, "dom": { "type": "string" }, "month": { "type": "string" }, "dow": { "type": "string" } }, "additionalProperties": false }, "desc": { "type": "string" }, "all_disks": { "type": "boolean" }, "disks": { "type": "array", "items": [ { "type": "string" } ] }, "type": { "type": "string", "enum": [ "LONG", "SHORT", "CONVEYANCE", "OFFLINE" ] } }, "additionalProperties": false, "title": "smart_task_create", "default": {} }

Update SMART Test Task of id.

smb

smb.bindip_choices

List of valid choices for IP addresses to which to bind the SMB service. Addresses assigned by DHCP are excluded from the results.

smb.config
-
smb.domain_choices

List of domains visible to winbindd. Returns empty list if winbindd is stopped.

smb.get_remote_acl
Arguments:
{ "type": "object", "properties": { "server": { "type": "string" }, "share": { "type": "string" }, "path": { "type": "string" }, "username": { "type": "string" }, "password": { "type": "string" }, "options": { "type": "object", "properties": { "use_kerberos": { "type": "boolean" }, "output_format": { "type": "string", "enum": [ "SMB", "LOCAL" ] } }, "additionalProperties": false } }, "additionalProperties": false, "title": "get_remote_acl", "default": {} }

Retrieves an ACL from a remote SMB server.

server IP Address or hostname of the remote server

share Share name

path path on the remote SMB server. Use "\" to separate path components

username username to use for authentication

password password to use for authentication

use_kerberos use credentials to get a kerberos ticket for authentication. AD only.

output_format format for the resulting ACL data. Choices are either 'SMB', which will present the information as a Windows SD, or 'LOCAL', which formats the ACL information according to the local filesystem of the TrueNAS server.
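
An illustrative call; the server, share, path and credentials are placeholders:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "smb.get_remote_acl",
    "params": [{
        "server": "192.168.0.20",
        "share": "SHARE1",
        "path": "\\",
        "username": "administrator",
        "password": "canary",
        "options": {
            "use_kerberos": false,
            "output_format": "SMB"
        }
    }]
}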

smb.status
Arguments:
{ "title": "info_level", "default": "ALL", "type": "string", "enum": [ "AUTH_LOG", "ALL", "SESSIONS", "SHARES", "LOCKS", "BYTERANGE", "NOTIFICATIONS" ] }
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
{ "type": "object", "properties": { "verbose": { "type": "boolean" }, "fast": { "type": "boolean" }, "restrict_user": { "type": "string" } }, "additionalProperties": false, "title": "status_options", "default": {} }

Returns SMB server status (sessions, open files, locks, notifications).

info_level type of information requested. Defaults to ALL.

status_options additional options to filter query results. Supported values are as follows: verbose gives more verbose status output. fast causes smbstatus to skip checking whether the status data is valid by verifying that the processes the status data refers to still exist; this speeds up execution on busy systems and clusters but might display stale data of processes that died without cleaning up properly. restrict_user limits results to the specified user.

smb.unixcharset_choices
-
smb.update
Arguments:
{ "type": "object", "properties": { "netbiosname": { "type": "string" }, "netbiosname_b": { "type": "string" }, "netbiosalias": { "type": "array", "items": [ { "type": "string" } ] }, "workgroup": { "type": "string" }, "description": { "type": "string" }, "enable_smb1": { "type": "boolean" }, "unixcharset": { "type": "string" }, "loglevel": { "type": "string", "enum": [ "NONE", "MINIMUM", "NORMAL", "FULL", "DEBUG" ] }, "syslog": { "#34;type": "boolean" }, "aapl_extensions": { "type": "boolean" }, "localmaster": { "type": "boolean" }, "guest": { "type": "string" }, "admin_group": { "type": [ "string", "null" ] }, "filemask": { "type": "string" }, "dirmask": { "type": "string" }, "ntlmv1_auth": { "type": "boolean" }, "bindip": { "type": "array", "items": [ { "type": "string" } ] }, "smb_options": { "type": "string" } }, "additionalProperties": false, "title": "smb_update", "default": {} }

Update SMB Service Configuration.

netbiosname defaults to the original hostname of the system.

workgroup and netbiosname should have different values.

enable_smb1 allows legacy SMB clients to connect to the server when enabled.

localmaster when set, determines if the system participates in a browser election.

domain_logons is used to provide netlogin service for older Windows clients if enabled.

guest attribute is specified to select the account to be used for guest access. It defaults to "nobody".

nullpw when enabled allows users to authorize access without a password.

hostlookup when enabled, allows using hostnames rather than IP addresses in "hostsallow"/"hostsdeny" fields of SMB Shares.
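
An illustrative update with placeholder names:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "smb.update",
    "params": [{
        "netbiosname": "truenas",
        "workgroup": "WORKGROUP",
        "description": "TrueNAS Server",
        "enable_smb1": false,
        "guest": "nobody"
    }]
}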

smb.sharesec

smb.sharesec.create
Arguments:
{ "type": "object", "properties": { "share_name": { "type": "string" }, "share_acl": { "type": "array", "items": [ { "type": "object" } ] } }, "additionalProperties": false, "title": "smbsharesec_create", "default": {} }

Update the ACL on a given SMB share. Will write changes to both /var/db/system/samba4/share_info.tdb and the configuration file. Since an SMB share will always have an ACL present, there is little distinction between the create and update methods apart from arguments.

share_name - name of SMB share.

share_acl a list of ACL entries (dictionaries) with the following keys:

ae_who_sid who the ACL entry applies to expressed as a Windows SID

ae_who_name who the ACL entry applies to expressed as a name. ae_who_name is a dictionary containing the following keys: domain (the domain the user is a member of) and name (the username in the domain). The domain for local users is the netbios name of the FreeNAS server.

ae_perm string representation of the permissions granted to the user or group. FULL grants read, write, execute, delete, write acl, and change owner. CHANGE grants read, write, execute, and delete. READ grants read and execute.

ae_type can be ALLOWED or DENIED.
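
An illustrative ACL granting Everyone read access; the share name is a placeholder:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "smb.sharesec.create",
    "params": [{
        "share_name": "smb_share",
        "share_acl": [
            {
                "ae_who_sid": "S-1-1-0",
                "ae_perm": "READ",
                "ae_type": "ALLOWED"
            }
        ]
    }]
}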

smb.sharesec.delete
Arguments:
{ "title": "id_or_name", "type": "string" }

Replace the share ACL for the specified SMB share with the Samba default ACL of S-1-1-0/FULL (Everyone - Full Control). In this case, access will be fully determined by the underlying filesystem ACLs and smb4.conf parameters governing access control and permissions. The share can be specified by name or by numerical index.

smb.sharesec.getacl
Arguments:
{ "title": "share_name", "type": "string" }
{ "type": "object", "properties": { "resolve_sids": { "type": "boolean" } }, "additionalProperties": false, "title": "options", "default": {} }

View the ACL information for share_name. The share ACL is distinct from filesystem ACLs which can be viewed by calling filesystem.getacl. ae_who_name will appear as None if the SMB service is stopped or if winbind is unable to resolve the SID to a name.

If the option resolve_sids is set to False then the returned ACL will not contain names.
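
A minimal example with a placeholder share name:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "smb.sharesec.getacl",
    "params": [
        "smb_share",
        {"resolve_sids": true}
    ]
}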

smb.sharesec.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Use query-filters to search the SMB share ACLs present on the server.

smb.sharesec.synchronize_acls

Synchronize the share ACL stored in the config database with Samba's running configuration as reflected in the share_info.tdb file.

The only situation in which the configuration stored in the database will overwrite samba's running configuration is if share_info.tdb is empty. Samba fakes a single S-1-1-0:ALLOW/0x0/FULL entry in the absence of an entry for a share in share_info.tdb.

smb.sharesec.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "share_acl": { "type": "array", "items": [ { "type": "object" } ] } }, "additionalProperties": false, "title": "smbsharesec_update", "default": {} }

Update the ACL on the share specified by the numerical index id. Will write changes to both /var/db/system/samba4/share_info.tdb and the configuration file.

snmp

snmp.config
-
snmp.update
Arguments:
{ "type": "object", "properties": { "location": { "type": "string" }, "contact": { "type": "string" }, "traps": { "type": "boolean" }, "v3": { "type": "boolean" }, "community": { "type": "string" }, "v3_username": { "type": "string" }, "v3_authtype": { "type": "string", "enum": [ "", "MD5", "SHA" ] }, "v3_password": { "type": "string" }, "v3_privproto": { "type": [ "string", "null" ], "enum": [ null, "AES", "DES" ] }, "v3_privpassphrase": { "type": "string" }, "loglevel": { "type": "integer" }, "options": { "type": "string" }, "zilstat": { "type": "boolean" }, "iftop": { "type": "boolean" } }, "additionalProperties": false, "title": "snmp_update", "default": {} }

Update SNMP Service Configuration.

v3 when set enables SNMP version 3.

v3_username, v3_authtype, v3_password, v3_privproto and v3_privpassphrase are only used when v3 is enabled.
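
An illustrative update enabling SNMPv3; all credentials and contact details are placeholders:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "snmp.update",
    "params": [{
        "location": "Server room",
        "contact": "admin@example.com",
        "v3": true,
        "v3_username": "snmpuser",
        "v3_authtype": "SHA",
        "v3_password": "authpassword",
        "v3_privproto": "AES",
        "v3_privpassphrase": "privpassphrase"
    }]
}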

ssh

ssh.bindiface_choices

Available choices for the bindiface attribute of SSH service.

ssh.config
-
ssh.update
Arguments:
{ "type": "object", "properties": { "bindiface": { "type": "array", "items": [ { "type": "string" } ] }, "tcpport": { "type": "integer" }, "rootlogin": { "type": "boolean" }, "passwordauth": { "type": "boolean" }, "kerberosauth": { "type": "boolean" }, "tcpfwd": { "type": "boolean" }, "compression": { "type": "boolean" }, "sftp_log_level": { "type": "string", "enum": [ "", "QUIET", "FATAL", "ERROR", "INFO", "VERBOSE", "DEBUG", "DEBUG2", "DEBUG3" ] }, "sftp_log_facility": { "type": "string", "enum": [ "", "DAEMON", "USER", "AUTH", "LOCAL0", "LOCAL1", "LOCAL2", "LOCAL3", "LOCAL4", "LOCAL5", "LOCAL6", "LOCAL7" ] }, "weak_ciphers": { "type": "array", "items": [ { "type": "string" } ] }, "options": { "type": "string" } }, "additionalProperties": false, "title": "ssh_update", "default": {} }

Update settings of SSH daemon service.

If bindiface is empty it will listen on all available addresses.

Make sshd listen only to igb0 interface.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "ssh.update",
    "params": [{
        "bindiface": ["igb0"]
    }]
}

staticroute

staticroute.create
Arguments:
{ "type": "object", "properties": { "destination": { "type": "string" }, "gateway": { "type": "string" }, "description": { "type": "string" } }, "additionalProperties": false, "title": "staticroute_create", "default": {} }

Create a Static Route.

Address families of gateway and destination should match when creating a static route.

description is an optional attribute for any notes regarding the static route.
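
A minimal example with placeholder addresses:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "staticroute.create",
    "params": [{
        "destination": "10.0.0.0/24",
        "gateway": "192.168.0.1",
        "description": "Route to the lab network"
    }]
}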

staticroute.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete Static Route of id.

staticroute.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
staticroute.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "destination": { "type": "string" }, "gateway": { "type": "string" }, "description": { "type": "string" } }, "additionalProperties": false, "title": "staticroute_create", "default": {} }

Update Static Route of id.

stats

stats.get_data
Arguments:
{ "type": "array", "title": "stats_list", "items": [ { "type": "object" } ] }
{ "type": "object", "properties": { "step": { "type": "integer" }, "start": { "type": "string" }, "end": { "type": "string" } }, "additionalProperties": false, "title": "stats-filter", "default": {} }

Get data points from rrd files.

stats.get_dataset_info
Arguments:
{ "title": "source", "type": "string" }
{ "title": "type", "type": "string" }

Returns info about a given dataset from some source.

stats.get_sources

Returns an object with all available sources and their metric datasets.

support

support.attach_ticket
Job This endpoint is a Job. Please refer to the Jobs section for details.
A file can be uploaded to this endpoint. Please refer to the Jobs section to upload a file.
Arguments:
{ "type": "object", "properties": { "ticket": { "type": "integer" }, "filename": { "type": "string" }, "username": { "type": "string" }, "password": { "type": "string" } }, "additionalProperties": false, "title": "attach_ticket", "default": {} }

Method to attach a file to an existing ticket.

support.config
-
support.fetch_categories
Arguments:
{ "title": "username", "type": "string" }
{ "title": "password", "type": "string" }

Fetch all the categories available for username using password. Returns a dict with the category name as a key and id as value.

support.fields

Returns list of pairs of field names and field titles for Proactive Support.

support.is_available

Returns whether Proactive Support is available for this product type and current license.

support.is_available_and_enabled

Returns whether Proactive Support is available and enabled.

support.new_ticket
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "object", "properties": { "title": { "type": "string" }, "body": { "type": "string" }, "category": { "type": "string" }, "attach_debug": { "type": "boolean" }, "username": { "type": "string" }, "password": { "type": "string" }, "type": { "type": "string", "enum": [ "BUG", "FEATURE" ] }, "criticality": { "type": "string" }, "environment": { "type": "string" }, "phone": { "type": "string" }, "name": { "type": "string" }, "email": { "type": "string" }, "cc": { "type": "array", "items": [ { "type": "string" } ] } }, "additionalProperties": false, "title": "new_ticket", "default": {} }

Creates a new ticket for support. This is done using the support proxy API. For FreeNAS it will be created on Redmine and for TrueNAS on SupportSuite.

For FreeNAS criticality, environment, phone, name and email attributes are not required. For TrueNAS username, password and type attributes are not required.

support.update
Arguments:
{ "type": "object", "properties": { "enabled": { "type": [ "boolean", "null" ] }, "name": { "type": "string" }, "title": { "type": "string" }, "email": { "type": "string" }, "phone": { "type": "string" }, "secondary_name": { "type": "string" }, "secondary_title": { "type": "string" }, "secondary_email": { "type": "string" }, "secondary_phone": { "type": "string" } }, "additionalProperties": false, "title": "support_update", "default": {} }

Update Proactive Support settings.

system

system.boot_id

Returns a unique boot identifier.

It is expected to be unique for every system boot.

system.build_time

Retrieve build time of the system.

system.debug
Job This endpoint is a Job. Please refer to the Jobs section for details.
A file can be downloaded from this endpoint. Please refer to the Jobs section to download a file.

Job to stream debug file.

This method is meant to be used in conjunction with core.download to get the debug downloaded via HTTP.

system.environment

Return the environment in which the product is running. Possible values:
- DEFAULT
- EC2

system.feature_enabled
Arguments:
{ "title": "feature", "type": "string", "enum": [ "DEDUP", "FIBRECHANNEL", "JAILS", "VM" ] }

Returns whether the feature is enabled or not.
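
For example, checking whether the VM feature is enabled:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "system.feature_enabled",
    "params": ["VM"]
}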

system.info

Returns basic system information.

system.is_freenas

FreeNAS is now TrueNAS CORE.

DEPRECATED: Use system.product_type

system.license_update
Arguments:
{ "title": "license", "type": "string" }

Update license file.

system.product_name

Returns name of the product we are using.

system.product_type

Returns the type of the product.

CORE - TrueNAS Core, community version
ENTERPRISE - TrueNAS Enterprise, appliance version
SCALE - TrueNAS SCALE

system.ready

Returns whether the system completed boot and is ready to use.

system.reboot
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "object", "properties": { "delay": { "type": "integer" } }, "additionalProperties": false, "title": "system-reboot", "default": {} }

Reboots the operating system.

Emits an "added" event of name "system" and id "reboot".

system.shutdown
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "object", "properties": { "delay": { "type": "integer" } }, "additionalProperties": false, "title": "system-shutdown", "default": {} }

Shuts down the operating system.

An "added" event of name "system" and id "shutdown" is emitted when shutdown is initiated.

system.state

Returns system state:
"BOOTING" - System is booting
"READY" - System completed boot and is ready to use
"SHUTTING_DOWN" - System is shutting down

system.version

Returns software version of the system.

system.advanced

system.advanced.config
-
system.advanced.sed_global_password

Returns configured global SED password.

system.advanced.serial_port_choices

Get available choices for serialport.

system.advanced.update
Arguments:
{ "type": "object", "properties": { "advancedmode": { "type": "boolean" }, "autotune": { "type": "boolean" }, "boot_scrub": { "type": "integer" }, "consolemenu": { "type": "boolean" }, "consolemsg": { "type": "boolean" }, "debugkernel": { "type": "boolean" }, "fqdn_syslog": { "type": "boolean" }, "motd": { "type": "string" }, "powerdaemon": { "type": "boolean" }, "serialconsole": { "type": "boolean" }, "serialport": { "type": "string" }, "serialspeed": { "type": "string", "enum": [ "9600", "19200", "38400", "57600", "115200" ] }, "swapondrive": { "type": "integer" }, "overprovision": { "type": [ "integer", "null" ] }, "traceback": { "type": "boolean" }, "uploadcrash": { "type": "boolean" }, "anonstats": { "type": "boolean" }, "sed_user": { "type": "string", "enum": [ "USER", "MASTER" ] }, "sed_passwd": { "type": "string" }, "sysloglevel": { "type": "string", "enum": [ "F_EMERG", "F_ALERT", "F_CRIT", "F_ERR", "F_WARNING", "F_NOTICE", "F_INFO", "F_DEBUG", "F_IS_DEBUG" ] }, "syslogserver": { "type": "string" }, "syslog_transport": { "type": "string", "enum": [ "UDP", "TCP", "TLS" ] }, "syslog_tls_certificate": { "type": [ "integer", "null" ] } }, "additionalProperties": false, "title": "system_advanced_update", "default": {} }

Update System Advanced Service Configuration.

consolemenu should be disabled if the menu at console is not desired. It will default to standard login in the console if disabled.

autotune when enabled executes autotune script which attempts to optimize the system based on the installed hardware.

When syslogserver is defined, logs of sysloglevel or above are sent.
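
As a sketch (the server name is a placeholder), enabling autotune and sending INFO-level logs to a remote syslog server could look like:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "system.advanced.update",
    "params": [{
        "autotune": true,
        "sysloglevel": "F_INFO",
        "syslogserver": "syslog.example.com"
    }]
}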

system.general

system.general.config
-
system.general.country_choices

Returns country choices.

system.general.kbdmap_choices

Returns kbdmap choices.

system.general.language_choices

Returns language choices.

system.general.local_url

Returns configured local url in the format of protocol://host:port

system.general.timezone_choices

Returns time zone choices.

system.general.ui_address_choices

Returns UI ipv4 address choices.

system.general.ui_certificate_choices

Return choices of certificates which can be used for ui_certificate.

system.general.ui_httpsprotocols_choices

Returns available HTTPS protocols.

system.general.ui_restart
Arguments:
{ "type": "integer", "title": "delay", "default": 3 }

Restart HTTP server to use latest UI settings.

HTTP server will be restarted after delay seconds.

system.general.ui_v6address_choices

Returns UI ipv6 address choices.

system.general.update
Arguments:
{ "type": "object", "properties": { "ui_certificate": { "type": [ "integer", "null" ] }, "ui_httpsport": { "type": "integer" }, "ui_httpsredirect": { "type": "boolean" }, "ui_httpsprotocols": { "type": "array", "items": [ { "type": "string" } ] }, "ui_port": { "type": "integer" }, "ui_address": { "type": "array", "items": [ { "type": "string" } ] }, "ui_v6address": { "type": "array", "items": [ { "type": "string" } ] }, "kbdmap": { "type": "string" }, "language": { "type": "string" }, "sysloglevel": { "type": "string", "enum": [ "F_EMERG", "F_ALERT", "F_CRIT", "F_ERR", "F_WARNING", "F_NOTICE", "F_INFO", "F_DEBUG", "F_IS_DEBUG" ] }, "syslogserver": { "type": "string" }, "timezone": { "type": "string" }, "crash_reporting": { "type": [ "boolean", "null" ] }, "usage_collection": { "type": [ "boolean", "null" ] } }, "additionalProperties": false, "title": "general_settings", "default": {} }

Update System General Service Configuration.

ui_certificate is used to enable HTTPS access to the system. If ui_certificate is not configured on boot, it is automatically created by the system.

ui_httpsredirect when set, makes sure that all HTTP requests are converted to HTTPS requests to better enhance security.

ui_address and ui_v6address are a list of valid ipv4/ipv6 addresses respectively which the system will listen on.

syslogserver and sysloglevel are deprecated fields as of 11.3 and will be permanently moved to system.advanced.update for 12.0.
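
For illustration (the port is a placeholder and the timezone is assumed to be one of the values returned by system.general.timezone_choices), changing the HTTPS port of the web UI and enabling HTTPS redirection could look like:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "system.general.update",
    "params": [{
        "ui_httpsport": 8443,
        "ui_httpsredirect": true,
        "timezone": "America/New_York"
    }]
}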

system.ntpserver

system.ntpserver.create
Arguments:
{ "type": "object", "properties": { "address": { "type": "string" }, "burst": { "type": "boolean" }, "iburst": { "type": "boolean" }, "prefer": { "type": "boolean" }, "minpoll": { "type": "integer" }, "maxpoll": { "type": "integer" }, "force": { "type": "boolean" } }, "additionalProperties": false, "title": "ntp_create", "default": {} }

Add an NTP Server.

address specifies the hostname/IP address of the NTP server.

burst when enabled sends a burst of eight packets instead of one if the server is reachable. This is designed to improve timekeeping quality with the server command.

iburst when enabled speeds up the initial synchronization, taking seconds rather than minutes.

prefer marks the specified server as preferred. When all other things are equal, this host is chosen for synchronization acquisition with the server command. It is recommended that this option be used for servers with time monitoring hardware.

minpoll is minimum polling time in seconds. It must be a power of 2 and less than maxpoll.

maxpoll is maximum polling time in seconds. It must be a power of 2 and greater than minpoll.

force when enabled forces the addition of NTP server even if it is currently unreachable.
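
For example, adding an NTP server (the hostname is a placeholder) with iburst and prefer enabled could look like:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "system.ntpserver.create",
    "params": [{
        "address": "0.pool.ntp.org",
        "iburst": true,
        "prefer": true
    }]
}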

system.ntpserver.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete NTP server of id.

system.ntpserver.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
system.ntpserver.test_ntp_server
-
system.ntpserver.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "address": { "type": "string" }, "burst": { "type": "boolean" }, "iburst": { "type": "boolean" }, "prefer": { "type": "boolean" }, "minpoll": { "type": "integer" }, "maxpoll": { "type": "integer" }, "force": { "type": "boolean" } }, "additionalProperties": false, "title": "ntp_create", "default": {} }

Update NTP server of id.

systemdataset

systemdataset.config
-
systemdataset.update
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "object", "properties": { "pool": { "type": [ "string", "null" ] }, "pool_exclude": { "type": [ "string", "null" ] }, "syslog": { "type": "boolean" } }, "additionalProperties": false, "title": "sysdataset_update", "default": {} }

Update System Dataset Service Configuration.

pool is the name of a valid pool configured in the system which will be used to host the system dataset.

pool_exclude can be specified to make sure that we don't place the system dataset on that pool if pool is not provided.
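
As a sketch (the pool name is a placeholder), moving the system dataset to a pool named tank could look like:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "systemdataset.update",
    "params": [{
        "pool": "tank"
    }]
}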

tftp

tftp.config
-
tftp.update
Arguments:
{ "type": "object", "properties": { "newfiles": { "type": "boolean" }, "directory": { "type": "string" }, "host": { "type": "string" }, "port": { "type": "integer" }, "options": { "type": "string" }, "umask": { "type": "string" }, "username": { "type": "string" } }, "additionalProperties": false, "title": "tftp_update", "default": {} }

Update TFTP Service Configuration.

newfiles when set enables network devices to send files to the system.

username sets the user account which will be used to access directory. Make sure that username has access to directory.

truecommand

truecommand.config
-
truecommand.connected

Returns information which shows whether the system has an authenticated API key and has initiated a VPN connection with TrueCommand.

truecommand.update
Arguments:
{ "type": "object", "properties": { "enabled": { "type": "boolean" }, "api_key": { "type": [ "string", "null" ] } }, "additionalProperties": false, "title": "truecommand_update", "default": {} }

Update Truecommand service settings.

api_key is a valid API key generated by iX Portal.

truenas

truenas.accept_eula

Accept TrueNAS EULA.

truenas.get_chassis_hardware

Returns what type of hardware this is, detected from dmidecode.

TRUENAS-X10-HA-D
TRUENAS-X10-S
TRUENAS-X20-HA-D
TRUENAS-X20-S
TRUENAS-M40-HA
TRUENAS-M40-S
TRUENAS-M50-HA
TRUENAS-M50-S
TRUENAS-M60-HA
TRUENAS-M60-S
TRUENAS-Z20-S
TRUENAS-Z20-HA-D
TRUENAS-Z30-HA-D
TRUENAS-Z30-S
TRUENAS-Z35-HA-D
TRUENAS-Z35-S
TRUENAS-Z50-HA-D
TRUENAS-Z50-S

Nothing in dmidecode but an M, X or Z class machine (this means production did not burn the hardware model into SMBIOS; this case is detected by looking at the motherboard):
TRUENAS-M
TRUENAS-X
TRUENAS-Z

Detected by the motherboard model:
TRUENAS-SBB

Anything else with a Supermicro X8 board (X8DTH was popular, but there are a few other boards out there):
TRUENAS-SM

Hardware that cannot be identified at all (for example, TrueNAS running on third-party hardware):
TRUENAS-UNKNOWN

truenas.get_customer_information

Returns stored customer information.

truenas.get_eula

Returns the TrueNAS End-User License Agreement (EULA).

truenas.is_eula_accepted

Returns whether the EULA is accepted or not.

truenas.is_production

Returns if system is marked as production.

truenas.set_production
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "boolean", "title": "production" }
{ "type": "boolean", "title": "attach_debug", "default": false }

Sets system production state and optionally sends initial debug.

truenas.update_customer_information
Arguments:
{ "type": "object", "properties": { "company": { "type": "string" }, "administrative_user": { "type": "object", "properties": { "first_name": { "type": "string" }, "last_name": { "type": "string" }, "title": { "type": "string" }, "office_phone": { "type": "string" }, "mobile_phone": { "type": "string" }, "primary_email": { "type": "string" }, "secondary_email": { "type": "string" }, "address": { "type": "string" }, "city": { "type": "string" }, "state": { "type": "string" }, "zip": { "type": "string" }, "country": { "type": "string" } }, "additionalProperties": false }, "technical_user": { "type": "object", "properties": { "first_name": { "type": "string" }, "last_name": { "type": "string" }, "title": { "type": "string" }, "office_phone": { "type": "string" }, "mobile_phone": { "type": "string" }, "primary_email": { "type": "string" }, "secondary_email": { "type": "string" }, "address": { "type": "string" }, "city": { "type": "string" }, "state": { "type": "string" }, "zip": { "type": "string" }, "country": { "type": "string" } }, "additionalProperties": false }, "reseller": { "type": "object", "properties": { "company": { "type": "string" }, "first_name": { "type": "string" }, "last_name": { "type": "string" }, "title": { "type": "string" }, "office_phone": { "type": "string" }, "mobile_phone": { "type": "string" } }, "additionalProperties": false }, "physical_location": { "type": "object", "properties": { "address": { "type": "string" }, "city": { "type": "string" }, "state": { "type": "string" }, "zip": { "type": "string" }, "country": { "type": "string" }, "contact_name": { "type": "string" }, "contact_phone_number": { "type": "string" }, "contact_email": { "type": "string" } }, "additionalProperties": false }, "primary_use_case": { "type": "string" }, "other_primary_use_case": { "type": "string" } }, "additionalProperties": false, "title": "customer_information_update", "default": {} }

Updates customer information.

tunable

tunable.create
Arguments:
{ "type": "object", "properties": { "var": { "type": "string" }, "value": { "type": "string" }, "type": { "type": "string", "enum": [ "SYSCTL", "LOADER", "RC" ] }, "comment": { "type": "string" }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "tunable_create", "default": {} }

Create a Tunable.

var represents name of the sysctl/loader/rc variable.

type for SCALE should be one of the following:
1) SYSCTL - Configure var for sysctl(8)

type for CORE/ENTERPRISE should be one of the following:
1) LOADER - Configure var for loader(8)
2) RC - Configure var for rc(8)
3) SYSCTL - Configure var for sysctl(8)
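
For example, creating an enabled SYSCTL tunable (the variable and value are placeholders) could look like:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "tunable.create",
    "params": [{
        "var": "kern.ipc.somaxconn",
        "value": "1024",
        "type": "SYSCTL",
        "comment": "Increase listen queue size",
        "enabled": true
    }]
}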

tunable.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete Tunable of id.

tunable.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
tunable.tunable_type_choices

Retrieve tunable type choices supported in the system.

tunable.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "var": { "type": "string" }, "value": { "type": "string" }, "type": { "type": "string", "enum": [ "SYSCTL", "LOADER", "RC" ] }, "comment": { "type": "string" }, "enabled": { "type": "boolean" } }, "additionalProperties": false, "title": "tunable_create", "default": {} }

Update Tunable of id.

unscheduledrebootalert

update

update.check_available
Arguments:
{ "type": "object", "properties": { "train": { "type": "string" } }, "additionalProperties": false, "title": "update-check-available", "default": {} }

Checks if there is an update available from update server.

status:
- REBOOT_REQUIRED: an update has already been applied
- AVAILABLE: an update is available
- UNAVAILABLE: no update available
- HA_UNAVAILABLE: HA is non-functional

Check available update using default train:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "update.check_available"
}
update.download
Job This endpoint is a Job. Please refer to the Jobs section for details.

Download updates using selected train.

update.file
Job This endpoint is a Job. Please refer to the Jobs section for details.
A file can be uploaded to this endpoint. Please refer to the Jobs section to upload a file.
Arguments:
{ "type": "object", "properties": { "destination": { "type": [ "string", "null" ] } }, "additionalProperties": false, "title": "updatefile", "default": {} }

Updates the system using the uploaded .tar file.

Use null destination to create a temporary location.

update.get_auto_download

Returns if update auto-download is enabled.

update.get_pending
Arguments:
{ "title": "path", "default": null, "type": [ "string", "null" ] }

Gets a list of packages already downloaded and ready to be applied. Each entry of the list consists of the type of operation and its name, e.g.

{ "operation": "upgrade", "name": "baseos-11.0 -> baseos-11.1" }

update.get_trains

Returns a dict of available trains, the currently configured train, and the train of the currently booted environment.

update.manual
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "title": "path", "type": "string" }

Apply manual update of file path.

update.set_auto_download
Arguments:
{ "type": "boolean", "title": "autocheck" }

Sets if update auto-download is enabled.

update.set_train
Arguments:
{ "title": "train", "type": "string" }

Set an update train to be used by default in updates.

update.update
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "object", "properties": { "train": { "type": "string" }, "reboot": { "type": "boolean" } }, "additionalProperties": false, "title": "update", "default": {} }

Downloads (if not already in cache) and applies an update.
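
For illustration (the train name is a placeholder), downloading and applying an update from a specific train and rebooting afterwards could look like:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "update.update",
    "params": [{
        "train": "TrueNAS-12.0-STABLE",
        "reboot": true
    }]
}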

ups

ups.config
-
ups.driver_choices

Returns choices of UPS drivers supported by the system.

ups.port_choices
-
ups.update
Arguments:
{ "type": "object", "properties": { "emailnotify": { "type": "boolean" }, "powerdown": { "type": "boolean" }, "rmonitor": { "type": "boolean" }, "nocommwarntime": { "type": [ "integer", "null" ] }, "remoteport": { "type": "integer" }, "shutdowntimer": { "type": "integer" }, "hostsync": { "type": "integer" }, "description": { "type": "string" }, "driver": { "type": "string" }, "extrausers": { "type": "string" }, "identifier": { "type": "string" }, "mode": { "type": "string", "enum": [ "MASTER", "SLAVE" ] }, "monpwd": { "type": "string" }, "monuser": { "type": "string" }, "options": { "type": "string" }, "optionsupsd": { "type": "string" }, "port": { "type": "string" }, "remotehost": { "type": "string" }, "shutdown": { "type": "string", "enum": [ "LOWBATT", "BATT" ] }, "shutdowncmd": { "type": [ "string", "null" ] }, "subject": { "type": "string" }, "toemail": { "type": "array", "items": [ { "type": "string" } ] } }, "additionalProperties": false, "title": "ups_update", "default": {} }

Update UPS Service Configuration.

emailnotify when enabled, sends out notifications of different UPS events via email.

powerdown when enabled, sets UPS to power off after shutting down the system.

nocommwarntime is a value in seconds which makes the UPS Service wait the specified number of seconds before alerting that it cannot reach the configured UPS.

shutdowntimer is a value in seconds which tells the Service to wait the specified number of seconds for the UPS before initiating a shutdown. This only applies when shutdown is set to "BATT".

shutdowncmd is the command which is executed to initiate a shutdown. It defaults to "poweroff".

toemail is a list of valid email addresses to which notification emails are sent.

user

user.create
Arguments:
{ "type": "object", "properties": { "uid": { "type": "integer" }, "username": { "type": "string" }, "group": { "type": "integer" }, "group_create": { "type": "boolean" }, "home": { "type": "string" }, "home_mode": { "type": "string" }, "shell": { "type": "string" }, "full_name": { "type": "string" }, "email": { "type": [ "string", "null" ] }, "password": { "type": "string" }, "password_disabled": { "type": "boolean" }, "locked": { "type": "boolean" }, "microsoft_account": { "type": "boolean" }, "smb": { "type": "boolean" }, "sudo": { "type": "boolean" }, "sudo_nopasswd": { "type": "boolean" }, "sudo_commands": { "type": "array", "items": [ { "type": "string" } ] }, "sshpubkey": { "type": [ "string", "null" ] }, "groups": { "type": "array", "items": [ { "type": "null" } ] }, "attributes": { "type": "object", "properties": {}, "additionalProperties": true } }, "additionalProperties": false, "title": "user_create", "default": {} }

Create a new user.

If uid is not provided it is automatically filled with the next one available.

group is required if group_create is false.

password is required if password_disabled is false.

Available choices for shell can be retrieved with user.shell_choices.

attributes is a general-purpose object for storing arbitrary user information.

smb specifies whether the user should be allowed access to SMB shares. The user will also automatically be added to the builtin_users group.
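
A minimal sketch of creating a local user with a newly created primary group (all values are placeholders):

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "user.create",
    "params": [{
        "username": "jdoe",
        "full_name": "Jane Doe",
        "group_create": true,
        "password": "changeme",
        "smb": true
    }]
}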

user.delete
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "delete_group": { "type": "boolean" } }, "additionalProperties": false, "title": "options", "default": {} }

Delete user id.

The delete_group option deletes the user primary group if it is not being used by any other user.

user.get_next_uid

Get the next available/free uid.

user.get_user_obj
Arguments:
{ "type": "object", "properties": { "username": { "type": "string" }, "uid": { "type": "integer" } }, "additionalProperties": false, "title": "get_user_obj", "default": {} }

Returns dictionary containing information from struct passwd for the user specified by either the username or uid. Bypasses user cache.

user.has_root_password

Return whether the root user has a valid password set.

This is used when the system is installed without a password and must be set on first use/login.

user.pop_attribute
Arguments:
{ "type": "integer", "title": "id" }
{ "title": "key", "type": "string" }

Remove user general purpose attributes dictionary key.

user.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Query users with query-filters and query-options. As a performance optimization, only local users will be queried by default.

Users from directory services such as NIS, LDAP, or Active Directory will be included in query results if the option {'extra': {'search_dscache': True}} is specified.

user.set_attribute
Arguments:
{ "type": "integer", "title": "id" }
{ "title": "key", "type": "string" }
{ "anyOf": [ { "type": "string" }, { "type": "integer" }, { "type": "boolean" }, { "type": "object" }, { "type": "array" } ], "title": "value", "nullable": false }

Set user general purpose attributes dictionary key to value.

e.g. Setting key="foo" value="var" will result in {"attributes": {"foo": "bar"}}

user.set_root_password
Arguments:
{ "title": "password", "type": "string" }
{ "type": "object", "properties": { "ec2": { "type": "object", "properties": { "instance_id": { "type": "string" } }, "additionalProperties": false } }, "additionalProperties": false, "title": "options", "default": {} }

Set password for root user if it is not already set.

user.shell_choices
Arguments:
{ "type": [ "integer", "null" ], "title": "user_id", "default": null }

Return the available shell choices to be used in user.create and user.update.

If user_id is provided, shell choices are filtered to ensure the user can access the shell choices provided.

user.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "uid": { "type": "integer" }, "username": { "type": "string" }, "group": { "type": "integer" }, "home": { "type": "string" }, "home_mode": { "type": "string" }, "shell": { "type": "string" }, "full_name": { "type": "string" }, "email": { "type": [ "string", "null" ] }, "password": { "type": "string" }, "password_disabled": { "type": "boolean" }, "locked": { "type": "boolean" }, "microsoft_account": { "type": "boolean" }, "smb": { "type": "boolean" }, "sudo": { "type": "boolean" }, "sudo_nopasswd": { "type": "boolean" }, "sudo_commands": { "type": "array", "items": [ { "type": "string" } ] }, "sshpubkey": { "type": [ "string", "null" ] }, "groups": { "type": "array", "items": [ { "type": "null" } ] }, "attributes": { "type": "object", "properties": {}, "additionalProperties": true } }, "additionalProperties": false, "title": "user_create", "default": {} }

Update attributes of an existing user.

vm

vm.clone
Arguments:
{ "type": "integer", "title": "id" }
{ "title": "name", "default": null, "type": "string" }

Clone the VM id.

name is an optional parameter for the cloned VM. If not provided it will append the next number available to the VM name.

vm.create
Arguments:
{ "type": "object", "properties": { "name": { "type": "string" }, "description": { "type": "string" }, "vcpus": { "type": "integer" }, "cores": { "type": "integer" }, "threads": { "type": "integer" }, "memory": { "type": "integer" }, "bootloader": { "type": "string", "enum": [ "UEFI", "UEFI_CSM", "GRUB" ] }, "grubconfig": { "type": [ "string", "null" ] }, "devices": { "type": "array", "items": [ { "type": "object" } ] }, "autostart": { "type": "boolean" }, "time": { "type": "string", "enum": [ "LOCAL", "UTC" ] }, "shutdown_timeout": { "type": "integer" } }, "additionalProperties": false, "title": "vm_create", "default": {} }

Create a Virtual Machine (VM).

grubconfig may either be a path for the grub.cfg file or the actual content of the file to be used with GRUB bootloader.

devices is a list of virtualized hardware to add to the newly created Virtual Machine. Failure to attach a device destroys the VM and any resources allocated by the VM devices.

Maximum of 16 guest virtual CPUs are allowed. By default, every virtual CPU is configured as a separate package. Multiple cores can be configured per CPU by specifying cores attributes. vcpus specifies total number of CPU sockets. cores specifies number of cores per socket. threads specifies number of threads per core.

shutdown_timeout indicates the time in seconds the system waits for the VM to cleanly shutdown. During system shutdown, if the VM hasn't exited after a hardware shutdown signal has been sent by the system within shutdown_timeout seconds, system initiates poweroff for the VM to stop it.
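
As a sketch (the name is a placeholder and the memory value is assumed to be in MiB), creating a small UEFI VM with two virtual CPUs and no devices could look like:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "vm.create",
    "params": [{
        "name": "testvm",
        "description": "Test virtual machine",
        "vcpus": 2,
        "cores": 1,
        "threads": 1,
        "memory": 1024,
        "bootloader": "UEFI",
        "autostart": false,
        "devices": []
    }]
}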

vm.delete
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "zvols": { "type": "boolean" }, "force": { "type": "boolean" } }, "additionalProperties": false, "title": "vm_delete", "default": {} }

Delete a VM.

vm.flags

Returns a dictionary with CPU flags for bhyve.

vm.get_attached_iface
Arguments:
{ "type": "integer", "title": "id" }

Get the attached physical interfaces from a given guest.

Returns: list: a list with all attached physical interfaces, otherwise False.

vm.get_available_memory
Arguments:
{ "type": "boolean", "title": "overcommit", "default": false }

Get the current maximum amount of available memory to be allocated for VMs.

If overcommit is true, only the currently used memory of running VMs will be accounted for. If false, all memory (including unused) of running VMs will be accounted for.

This includes the memory that would be freed by shrinking the ZFS ARC to its minimum.

Memory is a very volatile resource and values may change abruptly from one second to the next, but this is considered good enough to give the user an idea of how much memory is available at the moment and whether a VM should be allowed to launch.

vm.get_console
Arguments:
{ "type": "integer", "title": "id" }

Get the console device from a given guest.

Returns: str: with the device path or False.

vm.get_vmemory_in_use

The total amount of virtual memory in MB used by guests.

Returns a dict with the following information:
    RNP - Running but not provisioned
    PRD - Provisioned but not running
    RPRD - Running and provisioned

vm.get_vnc
Arguments:
{ "type": "integer", "title": "id" }

Get the vnc devices from a given guest.

Returns: list(dict): with all attributes of the vnc device or an empty list.

vm.get_vnc_ipv4

Get all available IPv4 addresses in the system.

Returns: list: a list of available IPv4 addresses.

vm.get_vnc_web
Arguments:
{ "type": "integer", "title": "id" }
{ "title": "host", "default": "", "type": "string" }

Get the VNC URL from a given VM.

Returns: list: with all available URLs.

vm.identify_hypervisor

Identify Hypervisors that might work nested with bhyve.

Returns: bool: True if compatible otherwise False.

vm.poweroff
Arguments:
{ "type": "integer", "title": "id" }
-
vm.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
vm.random_mac

Create a random MAC address.

Returns: str: six groups of two hexadecimal digits.

vm.restart
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "integer", "title": "id" }

Restart a VM.

vm.start
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "overcommit": { "type": "boolean" } }, "additionalProperties": false, "title": "options", "default": {} }

Start a VM.

options.overcommit defaults to false, meaning VMs are not allowed to start if there is not enough available memory to hold all configured VMs. If true, VM starts even if there is not enough memory for all configured VMs.

Error codes:

ENOMEM(12): not enough free memory to run the VM without overcommit
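
For example, starting the VM of id 1 while allowing memory overcommit could look like:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "vm.start",
    "params": [1, {"overcommit": true}]
}
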
vm.status
Arguments:
{ "type": "integer", "title": "id" }

Get the status of a VM.

Returns a dict:
- state, RUNNING or STOPPED
- pid, process id if RUNNING

vm.stop
Job This endpoint is a Job. Please refer to the Jobs section for details.
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "force": { "type": "boolean" }, "force_after_timeout": { "type": "boolean" } }, "additionalProperties": false, "title": "options", "default": {} }

Stops a VM.

Unresponsive guests that have exceeded the shutdown_timeout defined by the user must be powered down using vm.poweroff. vm.stop only sends a shutdown signal to the guest and waits the desired shutdown_timeout value before tearing down guest vmemory.

force_after_timeout, when supplied, will initiate poweroff for the VM, forcing it to exit if it has not already stopped within the specified shutdown_timeout.
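
For illustration, stopping the VM of id 1 and forcing poweroff if it does not exit within its shutdown_timeout could look like:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "vm.stop",
    "params": [1, {"force_after_timeout": true}]
}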

vm.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "name": { "type": "string" }, "description": { "type": "string" }, "vcpus": { "type": "integer" }, "cores": { "type": "integer" }, "threads": { "type": "integer" }, "memory": { "type": "integer" }, "bootloader": { "type": "string", "enum": [ "UEFI", "UEFI_CSM", "GRUB" ] }, "grubconfig": { "type": [ "string", "null" ] }, "devices": { "type": "array", "items": [ { "type": "object" } ] }, "autostart": { "type": "boolean" }, "time": { "type": "string", "enum": [ "LOCAL", "UTC" ] }, "shutdown_timeout": { "type": "integer" } }, "additionalProperties": false, "title": "vm_create", "default": {} }

Update all information of a specific VM.

devices is a list of virtualized hardware to attach to the virtual machine. If devices is not present, no change is made to devices. If either the device list order or data stored by the device changes when the attribute is passed, these actions are taken:

1) Any device that was previously attached to the VM but is not present in the devices list is removed from the virtual machine.
2) Devices are updated in the devices list when they contain a valid id attribute that corresponds to an existing device.
3) Devices that do not have an id attribute are created and attached to the VM of id.

vm.vnc_port_wizard

Returns the next available VNC port and web VNC port.

Returns a dict with two keys vnc_port and vnc_web.

vm.device

vm.device.create
Arguments:
{ "type": "object", "properties": { "dtype": { "type": "string", "enum": [ "NIC", "DISK", "CDROM", "PCI", "VNC", "RAW" ] }, "vm": { "type": "integer" }, "attributes": { "type": "object", "properties": {}, "additionalProperties": true }, "order": { "type": [ "integer", "null" ] } }, "additionalProperties": false, "title": "vmdevice_create", "default": {} }

Create a new device for the VM of id vm.

If dtype is the RAW type and a new raw file is to be created, attributes.exists will be passed as false. This means the API handles creating the raw file and raises the appropriate exception if file creation fails.

If dtype is of DISK type and a new Zvol is to be created, attributes.create_zvol will be passed as true with valid attributes.zvol_name and attributes.zvol_volsize values.
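
As a sketch (the zvol name is a placeholder and the size value is assumed to be in bytes), attaching a DISK device backed by a newly created Zvol to the VM of id 1 could look like:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "vm.device.create",
    "params": [{
        "dtype": "DISK",
        "vm": 1,
        "attributes": {
            "create_zvol": true,
            "zvol_name": "tank/testvm-disk0",
            "zvol_volsize": 10737418240
        }
    }]
}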

vm.device.delete
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "zvol": { "type": "boolean" }, "raw_file": { "type": "boolean" } }, "additionalProperties": false, "title": "vm_device_delete", "default": {} }

Delete a VM device of id.

vm.device.nic_attach_choices

Available choices for NIC Attach attribute.

vm.device.pptdev_choices

Available choices for PCI passthru device.

vm.device.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
vm.device.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "dtype": { "type": "string", "enum": [ "NIC", "DISK", "CDROM", "PCI", "VNC", "RAW" ] }, "vm": { "type": "integer" }, "attributes": { "type": "object", "properties": {}, "additionalProperties": true }, "order": { "type": [ "integer", "null" ] } }, "additionalProperties": false, "title": "vmdevice_create", "default": {} }

Update a VM device of id.

Pass attributes.size to resize a dtype RAW device. The raw file will be resized.

vm.device.vnc_bind_choices

Available choices for VNC Bind attribute.

vmware

vmware.create
Arguments:
{ "type": "object", "properties": { "datastore": { "type": "string" }, "filesystem": { "type": "string" }, "hostname": { "type": "string" }, "password": { "type": "string" }, "username": { "type": "string" } }, "additionalProperties": false, "title": "vmware_create", "default": {} }

Create VMWare snapshot.

hostname is a valid IP address / hostname of a VMWare host. When clustering, this is the vCenter server for the cluster.

username and password are the credentials used to authorize access to the VMWare host.

datastore is a valid datastore name which exists on the VMWare host.

vmware.dataset_has_vms
Arguments:
{ "title": "dataset", "type": "string" }
{ "type": "boolean", "title": "recursive" }

Returns "true" if dataset is configured with a VMWare snapshot

vmware.delete
Arguments:
{ "type": "integer", "title": "id" }

Delete VMWare snapshot of id.

vmware.get_datastores
Arguments:
{ "type": "object", "properties": { "hostname": { "type": "string" }, "username": { "type": "string" }, "password": { "type": "string" } }, "additionalProperties": false, "title": "vmware-creds", "default": {} }

Get datastores from VMWare.

vmware.get_virtual_machines
Arguments:
{ "type": "integer", "title": "pk" }

Returns Virtual Machines on the VMWare host identified by pk.

vmware.match_datastores_with_datasets
Arguments:
{ "type": "object", "properties": { "hostname": { "type": "string" }, "username": { "type": "string" }, "password": { "type": "string" } }, "additionalProperties": false, "title": "vmware-creds", "default": {} }

Requests datastores from vCenter server and tries to match them with local filesystems.

Returns a list of datastores, a list of local filesystems and guessed relationship between them.

{
  "id": "d51da71b-bb48-4b8b-a8f7-6046fcc892b4",
  "msg": "method",
  "method": "vmware.match_datastores_with_datasets",
  "params": [{"hostname": "10.215.7.104", "username": "root", "password": "password"}]
}

returns

{
  "datastores": [
    {
      "name": "10.215.7.102",
      "description": "NFS mount '/mnt/tank' on 10.215.7.102",
      "filesystems": ["tank"]
    },
    {
      "name": "datastore1",
      "description": "mpx.vmhba0:C0:T0:L0",
      "filesystems": []
    },
    {
      "name": "zvol",
      "description": "iSCSI extent naa.6589cfc000000b3f0a891a2c4e187594",
      "filesystems": ["tank/vol"]
    }
  ],
  "filesystems": [
    {
      "type": "FILESYSTEM",
      "name": "tank",
      "description": "NFS mount '/mnt/tank' on 10.215.7.102"
    },
    {
      "type": "VOLUME",
      "name": "tank/vol",
      "description": "iSCSI extent naa.6589cfc000000b3f0a891a2c4e187594"
    }
  ]
}
vmware.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-
vmware.update
Arguments:
{ "type": "integer", "title": "id" }
{ "type": "object", "properties": { "datastore": { "type": "string" }, "filesystem": { "type": "string" }, "hostname": { "type": "string" }, "password": { "type": "string" }, "username": { "type": "string" } }, "additionalProperties": false, "title": "vmware_create", "default": {} }

Update VMWare snapshot of id.

webdav

webdav.config
-
webdav.update
Arguments:
{ "type": "object", "properties": { "protocol": { "type": "string", "enum": [ "HTTP", "HTTPS", "HTTPHTTPS" ] }, "tcpport": { "type": "integer" }, "tcpportssl": { "type": "integer" }, "password": { "type": "string" }, "htauth": { "type": "string", "enum": [ "NONE", "BASIC", "DIGEST" ] }, "certssl": { "type": [ "integer", "null" ] } }, "additionalProperties": false, "title": "webdav_update", "default": {} }

Update Webdav Service Configuration.

protocol specifies which protocol should be used for connecting to the Webdav Service. A value of "HTTPHTTPS" allows both HTTP and HTTPS connections to the share.

certssl is a valid id of a certificate configured in the system. This is required if an HTTPS connection is desired with the Webdav Service.

There are 3 types of Authentication supported with Webdav:
1) NONE - No authentication is required
2) BASIC - Password is sent over the network as plaintext
3) DIGEST - Hash of the password is sent over the network

htauth should be one of the valid types described above.

webui.image

webui.image.create
Job This endpoint is a Job. Please refer to the Jobs section for details.
A file can be uploaded to this endpoint. Please refer to the Jobs section to upload a file.
Arguments:
{ "type": "object", "properties": { "identifier": { "type": "string" } }, "additionalProperties": false, "title": "options", "default": {} }

Create a new database entry with identifier as the tag; all entries are lowercased.

The uploaded file is then placed in the /var/db/system/webui/images directory.

webui.image.delete
Arguments:
{ "type": "integer", "title": "id" }

Remove the database entry, and then the item if it exists.

webui.image.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }
-

zfs.snapshot

zfs.snapshot.clone
Arguments:
{ "type": "object", "properties": { "snapshot": { "type": "string" }, "dataset_dst": { "type": "string" } }, "additionalProperties": false, "title": "snapshot_clone", "default": {} }

Clone a given snapshot to a new dataset.

Returns: bool: True if succeeded, otherwise False.

zfs.snapshot.create
Arguments:
{ "type": "object", "properties": { "dataset": { "type": "string" }, "name": { "type": "string" }, "naming_schema": { "type": "string" }, "recursive": { "type": "boolean" }, "vmware_sync": { "type": "boolean" }, "properties": { "type": "object", "properties": {}, "additionalProperties": true } }, "additionalProperties": false, "title": "snapshot_create", "default": {} }

Take a snapshot from a given dataset.
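
For example, taking a recursive snapshot of a dataset (names are placeholders) could look like:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "zfs.snapshot.create",
    "params": [{
        "dataset": "tank/data",
        "name": "manual-2020-01-01",
        "recursive": true
    }]
}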

zfs.snapshot.delete
Arguments:
{ "title": "id", "type": "string" }
{ "type": "object", "properties": { "defer": { "type": "boolean" } }, "additionalProperties": false, "title": "options", "default": {} }

Delete snapshot of name id.

options.defer will defer the deletion of snapshot.

zfs.snapshot.query
Arguments:
{ "type": "array", "title": "query-filters", "default": [], "items": [ { "type": "null" } ] }
{ "type": "object", "properties": { "relationships": { "type": "boolean" }, "extend": { "type": [ "string", "null" ] }, "extend_context": { "type": [ "string", "null" ] }, "prefix": { "type": [ "string", "null" ] }, "extra": { "type": "object", "properties": {}, "additionalProperties": true }, "order_by": { "type": "array", "items": [ { "type": "null" } ] }, "select": { "type": "array", "items": [ { "type": "null" } ] }, "count": { "type": "boolean" }, "get": { "type": "boolean" }, "offset": { "type": "integer" }, "limit": { "type": "integer" } }, "additionalProperties": false, "title": "query-options", "default": {} }

Query all ZFS Snapshots with query-filters and query-options.

zfs.snapshot.remove
Arguments:
{ "type": "object", "properties": { "dataset": { "type": "string" }, "name": { "type": "string" }, "defer_delete": { "type": "boolean" } }, "additionalProperties": false, "title": "snapshot_remove", "default": {} }

Remove a snapshot from a given dataset.

Returns: bool: True if succeeded, otherwise False.

zfs.snapshot.rollback
Arguments:
{ "title": "id", "type": "string" }
{ "type": "object", "properties": { "recursive": { "type": "boolean" }, "recursive_clones": { "type": "boolean" }, "force": { "type": "boolean" } }, "additionalProperties": false, "title": "options", "default": {} }

Rollback to a given snapshot id.

options.recursive will destroy any snapshots and bookmarks more recent than the one specified.

options.recursive_clones is just like recursive but will also destroy any clones.

options.force will force unmount of any clones.
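
For illustration (the snapshot name is a placeholder), rolling back to a snapshot while destroying any more recent snapshots could look like:

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "zfs.snapshot.rollback",
    "params": ["tank/data@manual-2020-01-01", {"recursive": true}]
}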

Websocket Events

Events are triggers that are generated under certain scenarios or at a certain period of time.

Some events can accept arguments and return results that are influenced by those arguments. Follow this format to pass arguments to events:

event_name:arg

If arg is accepted by the event, it is parsed automatically. Events that do not accept arguments only use the event name when subscribing to the event.

core.get_jobs

Updates on job changes.

This event can be subscribed to with the wildcard * as the event name.

acme.dns.authenticator.query

Sent on acme.dns.authenticator changes.

This event can be subscribed to with the wildcard * as the event name.

acme.registration.query

Sent on acme.registration changes.

This event can be subscribed to with the wildcard * as the event name.

alertservice.query

Sent on alertservice changes.

This event can be subscribed to with the wildcard * as the event name.

api_key.query

Sent on api_key changes.

This event can be subscribed to with the wildcard * as the event name.

bootenv.query

Sent on bootenv changes.

This event can be subscribed to with the wildcard * as the event name.

certificate.query

Sent on certificate changes.

This event can be subscribed to with the wildcard * as the event name.

certificateauthority.query

Sent on certificateauthority changes.

This event can be subscribed to with the wildcard * as the event name.

cloudsync.query

Sent on cloudsync changes.

This event can be subscribed to with the wildcard * as the event name.

cloudsync.credentials.query

Sent on cloudsync.credentials changes.

This event can be subscribed to with the wildcard * as the event name.

cronjob.query

Sent on cronjob changes.

This event can be subscribed to with the wildcard * as the event name.

enclosure.query

Sent on enclosure changes.

This event can be subscribed to with the wildcard * as the event name.

fcport.query

Sent on fcport changes.

This event can be subscribed to with the wildcard * as the event name.

group.query

Sent on group changes.

This event can be subscribed to with the wildcard * as the event name.

idmap.query

Sent on idmap changes.

This event can be subscribed to with the wildcard * as the event name.

initshutdownscript.query

Sent on initshutdownscript changes.

This event can be subscribed to with the wildcard * as the event name.

interface.query

Sent on interface changes.

This event can be subscribed to with the wildcard * as the event name.

ipmi.query

Sent on ipmi changes.

This event can be subscribed to with the wildcard * as the event name.

iscsi.auth.query

Sent on iscsi.auth changes.

This event can be subscribed to with the wildcard * as the event name.

iscsi.extent.query

Sent on iscsi.extent changes.

This event can be subscribed to with the wildcard * as the event name.

iscsi.initiator.query

Sent on iscsi.initiator changes.

This event can be subscribed to with the wildcard * as the event name.

iscsi.portal.query

Sent on iscsi.portal changes.

This event can be subscribed to with the wildcard * as the event name.

iscsi.target.query

Sent on iscsi.target changes.

This event can be subscribed to with the wildcard * as the event name.

iscsi.targetextent.query

Sent on iscsi.targetextent changes.

This event can be subscribed to with the wildcard * as the event name.

jail.query

Sent on jail changes.

This event can be subscribed to with the wildcard * as the event name.

kerberos.keytab.query

Sent on kerberos.keytab changes.

This event can be subscribed to with the wildcard * as the event name.

kerberos.realm.query

Sent on kerberos.realm changes.

This event can be subscribed to with the wildcard * as the event name.

keychaincredential.query

Sent on keychaincredential changes.

This event can be subscribed to with the wildcard * as the event name.

multipath.query

Sent on multipath changes.

This event can be subscribed to with the wildcard * as the event name.

plugin.query

Sent on plugin changes.

This event can be subscribed to with the wildcard * as the event name.

pool.query

Sent on pool changes.

This event can be subscribed to with the wildcard * as the event name.

pool.dataset.query

Sent on pool.dataset changes.

This event can be subscribed to with the wildcard * as the event name.

pool.dataset.userprop.query

Sent on pool.dataset.userprop changes.

This event can be subscribed to with the wildcard * as the event name.

pool.scrub.query

Sent on pool.scrub changes.

This event can be subscribed to with the wildcard * as the event name.

pool.snapshottask.query

Sent on pool.snapshottask changes.

This event can be subscribed to with the wildcard * as the event name.

replication.query

Sent on replication changes.

This event can be subscribed to with the wildcard * as the event name.

rsyncmod.query

Sent on rsyncmod changes.

This event can be subscribed to with the wildcard * as the event name.

rsynctask.query

Sent on rsynctask changes.

This event can be subscribed to with the wildcard * as the event name.

service.query

Sent on service changes.

This event can be subscribed to with the wildcard * as the event name.

sharing.afp.query

Sent on sharing.afp changes.

This event can be subscribed to with the wildcard * as the event name.

sharing.nfs.query

Sent on sharing.nfs changes.

This event can be subscribed to with the wildcard * as the event name.

sharing.smb.query

Sent on sharing.smb changes.

This event can be subscribed to with the wildcard * as the event name.

sharing.webdav.query

Sent on sharing.webdav changes.

This event can be subscribed to with the wildcard * as the event name.

smart.test.query

Sent on smart.test changes.

This event can be subscribed to with the wildcard * as the event name.

smb.sharesec.query

Sent on smb.sharesec changes.

This event can be subscribed to with the wildcard * as the event name.

staticroute.query

Sent on staticroute changes.

This event can be subscribed to with the wildcard * as the event name.

system.ntpserver.query

Sent on system.ntpserver changes.

This event can be subscribed to with the wildcard * as the event name.

tunable.query

Sent on tunable changes.

This event can be subscribed to with the wildcard * as the event name.

user.query

Sent on user changes.

This event can be subscribed to with the wildcard * as the event name.

vm.query

Sent on vm changes.

This event can be subscribed to with the wildcard * as the event name.

vm.device.query

Sent on vm.device changes.

This event can be subscribed to with the wildcard * as the event name.

vmware.query

Sent on vmware changes.

This event can be subscribed to with the wildcard * as the event name.

webui.image.query

Sent on webui.image changes.

This event can be subscribed to with the wildcard * as the event name.

zfs.dataset.query

Sent on zfs.dataset changes.

This event can be subscribed to with the wildcard * as the event name.

zfs.pool.query

Sent on zfs.pool changes.

This event can be subscribed to with the wildcard * as the event name.

zfs.snapshot.query

Sent on zfs.snapshot changes.

This event can be subscribed to with the wildcard * as the event name.

auth.sessions

Notification of new and removed sessions.

This event can be subscribed to with the wildcard * as the event name.

system

Sent on system state changes.

id=ready -- Finished boot process

id=reboot -- Started reboot process

id=shutdown -- Started shutdown process

This event can be subscribed to with the wildcard * as the event name.

alert.list

Sent on alert changes.

This event can be subscribed to with the wildcard * as the event name.

network.config

Sent on network configuration changes.

This event can be subscribed to with the wildcard * as the event name.

directoryservices.status

Sent on directory service state changes.

This event can be subscribed to with the wildcard * as the event name.

disk.query

Sent on disk changes.

This event can be subscribed to with the wildcard * as the event name.

truecommand.config

Sent on TrueCommand configuration changes.

This event can be subscribed to with the wildcard * as the event name.

zfs.pool.scan

Progress of pool resilver/scrub.

This event can be subscribed to with the wildcard * as the event name.

failover.setup

Sent when failover is being setup.

This event can be subscribed to with the wildcard * as the event name.

failover.status

Sent when failover status changes.

This event can be subscribed to with the wildcard * as the event name.

failover.disabled_reasons

Sent when the reasons for failover being disabled have changed.

This event can be subscribed to with the wildcard * as the event name.

failover.upgrade_pending

Sent when the system is ready and an HA upgrade is pending.

It is expected that the client will react by issuing an upgrade_finish call when the user chooses to proceed.

This event can be subscribed to with the wildcard * as the event name.

failover.carp_event

Sent when a CARP state is changed.

This event can be subscribed to with the wildcard * as the event name.

system.health

Notifies of current system health, including statistics about memory and CPU consumption, pools, and whether updates are available. An integer delay argument can be specified to control how often the periodic event is generated.

This event cannot be subscribed to with the wildcard * as the event name.

trueview.stats

Retrieve True View statistics. An integer delay argument can be specified to control how often the periodic event is generated.

This event cannot be subscribed to with the wildcard * as the event name.

filesystem.file_tail_follow

Retrieve the last no_of_lines of a specific path, then stream any new lines as they are added. The argument has the format path:no_of_lines (e.g. /var/log/messages:3). no_of_lines is optional and defaults to 3; path is required.

This event cannot be subscribed to with the wildcard * as the event name.
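For illustration only, a subscription request for this event might look like the example below. This is a sketch: it assumes the argument is appended to the event name after a colon, in the same name:argument style documented for reporting.processes below, and the id is an arbitrary client-generated value.

{
    "id": "2f8f1bbd-5f2a-4a39-9c3e-0a0d7f9f1a11",
    "name": "filesystem.file_tail_follow:/var/log/messages:3",
    "msg": "sub"
}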

reporting.realtime

Retrieve real-time statistics for CPU, network, virtual memory and ZFS ARC.

This event cannot be subscribed to with the wildcard * as the event name.

reporting.processes

Retrieve statistics about currently running processes.

Usage: reporting.processes:{"interval": 10, "cpu_percent": 0.1, "memory_percent": 0.1}

This event cannot be subscribed to with the wildcard * as the event name.
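As a sketch, a subscription request following the Usage format above would look like this (the id is an arbitrary client-generated value and the thresholds are example values; note that the quotes inside the event name must be escaped in the JSON message):

{
    "id": "b7c1a9e0-2d34-4f1e-9a77-4d5e6f7a8b9c",
    "name": "reporting.processes:{\"interval\": 10, \"cpu_percent\": 0.1, \"memory_percent\": 0.1}",
    "msg": "sub"
}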

Subscribing to Events

Events are generated by the system when certain conditions are met. An event is only useful if a client is listening for it; listening to events is called subscribing.

A client can subscribe to all system events by specifying * as the event name. This only applies to events that accept * as a wildcard (refer to the list above).

Websocket Client Subscription

Request:

{
    "id": "ad4dea8f-53a8-9a5c-1825-523e218c13ca",
    "name": "*",
    "msg": "sub"
}

Response:

{
    "msg": "ready",
    "subs": ["ad4dea8f-53a8-9a5c-1825-523e218c13ca"]
}

The example above subscribes the websocket client to system events that accept * as a wildcard.

Each time such an event is generated by the system, the websocket client receives it.

Event Response Example:

{
    "msg": "changed",
    "collection": "core.get_jobs",
    "id": 79,
    "fields": {
        "id": 79, "method": "zfs.pool.scrub",
        "arguments": ["vol1", "START"], "logs_path": null,
        "logs_excerpt": null,
        "progress": {"percent": 0.001258680822502356, "description": "Scrubbing", "extra": null},
        "result": null, "error": null, "exception": null, "exc_info": null,
        "state": "RUNNING", "time_started": {"$date": 1571297741181},
        "time_finished": null
    }
}

The event above was generated by the system while a pool was being scrubbed.

The example below shows how to subscribe to the reporting.realtime event.

Request:

{
    "id": "8592f7c2-ce2b-4466-443a-80bbae5937d9",
    "name": "reporting.realtime",
    "msg": "sub"
}

Response:

{
    "msg": "ready",
    "subs": ["8592f7c2-ce2b-4466-443a-80bbae5937d9"]
}

Event Response Example:

{
    "msg": "added", "collection": "reporting.realtime",
    "fields": {
        "virtual_memory": {
            "total": 4784615424, "available": 854155264, "percent": 82.1,
            "used": 3779424256, "free": 136634368, "active": 894599168,
            "inactive": 717520896, "buffers": 0, "cached": 0,
            "shared": 188002304, "wired": 2884825088
        },
        "cpu": {"temperature": {}},
        "interfaces": {
            "em0": {
                "received_bytes": 1068597254, "received_bytes_last": 1068597254,
                "sent_bytes": 78087857, "sent_bytes_last": 78087857
            },
            "lo0": {
                "received_bytes": 358364554, "received_bytes_last": 358364554,
                "sent_bytes": 358360787, "sent_bytes_last": 358360787
            }
        }
    }
}

The example below shows how to subscribe to job events via core.get_jobs.

Request:

{
    "id": "19922f7c2-ce2b-4455-443a-80bbae5937a2",
    "name": "core.get_jobs",
    "msg": "sub"
}

Response:

{
    "msg": "ready",
    "subs": ["19922f7c2-ce2b-4455-443a-80bbae5937a2"]
}

Event Response Example:

{
    "msg": "added", "collection": "core.get_jobs", "id": 26,
    "fields": {
        "id": 26, "method": "jail.stop", "arguments": ["abc"],
        "logs_path": null, "logs_excerpt": null,
        "progress": {"percent": null, "description": null, "extra": null},
        "result": null, "error": null, "exception": null, "exc_info": null,
        "state": "WAITING", "time_started": {"$date": 1571305262662},
        "time_finished": null
    }
}

The event above was generated when a jail was stopped and a job for stopping the jail was started. The event response shows that the system has registered the job and that it is waiting to be executed.

Websocket Client Unsubscription

Once the client has consumed the information it needs and no further updates are required, it can unsubscribe from an event as shown here:

Request:

{
    "id": "8592f7c2-ce2b-4466-443a-80bbae5937d9",
    "msg": "unsub"
}

The server does not send a response for this call. This example unsubscribes from the reporting.realtime event that was subscribed to above. The id is the same value sent when subscribing to the event.

Jobs

Tasks which require significant time to execute, or which process a significant amount of input or output, are tagged as jobs. When a client calls a method marked as a job, it receives a job id in the response. With this job id, the client can query the job to see its progress and state. On completion, the job output contains either the result returned by the method or the error that occurred.

Example of calling an endpoint marked as a job

The client connects and authenticates as usual, then calls the job method like any other method. Here jail.start is used as an example.

{
    "id": "6841f242-840a-11e6-a437-00e04d680384",
    "msg": "method",
    "method": "jail.start",
    "params": ["jail_name"]
}

The server answers with the job id as the result.

{
  "msg": "result",
  "id": "c0bb5952-fc60-232a-3d6c-a47961b771a5",
  "result": 53
}

Query Job Status

Job status can be queried with the core.get_jobs method.

Request:

{
  "id": "d8e715be-6bc7-11e6-8c28-00e04d680384",
  "msg": "method",
  "method": "core.get_jobs",
  "params": [[["id", "=", 53]]]
}

Response:

{
  "id": "d8e715be-6bc7-11e6-8c28-00e04d680384",
  "msg": "result",
  "result": [{'id': 53, 'method': 'jail.start', 'arguments': ['abc'], 'logs_path': None, 'logs_excerpt': None, 'progress': {'percent': None, 'description': None, 'extra': None}, 'result': True, 'error': None, 'exception': None, 'exc_info': None, 'state': 'SUCCESS', 'time_started': {"$date": 1571300596053}, 'time_finished': null}]
}

Uploading / Downloading Files

Some jobs take files as input or produce files as output; these files are uploaded to or downloaded from dedicated HTTP endpoints.

Downloading a File

If a job produces a file as output, the core.download method is used to obtain a URL from which the output file can be downloaded.

Request:

{
    "id": "d8e715be-6bc7-11e6-8c28-00e04d680384",
    "msg": "method",
    "method": "core.download",
    "params": ["config.save", [{}], "freenas-FreeNAS-11.3-MASTER-201910090828-20191017122016.db"]
}

Response:

{
    "id": "cdc8740a-336b-b0cd-b850-47568fe94223",
    "msg": "result",
    "result": [86, "/_download/86?auth_token=9WIqYg4jAYEOGQ4g319Bkr64Oj8CZk1VACfyN68M7hgjGTdeSSgZjSf5lJEshS8M"]
}

In the response, the first value 86 is the job id for config.save. This can be used to query the status of the job. The second value is a REST endpoint used to download the file.

The download endpoint has a special format:

http://system_ip/_download/{job_id}?auth_token={token}

job_id and token are the parameters being passed; core.download returns the download URI with both values already filled in.

Note:

1) Job output is not buffered, so execution will block if the file download is not started.

2) The file download must begin within 60 seconds or the job is canceled.

3) The file can only be downloaded once.
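As an illustration, the file can then be fetched with a plain HTTP GET of the URI returned above (the job id, token and file name below are the example values from the core.download call above):

curl -o freenas-FreeNAS-11.3-MASTER-201910090828-20191017122016.db "http://system_ip/_download/86?auth_token=9WIqYg4jAYEOGQ4g319Bkr64Oj8CZk1VACfyN68M7hgjGTdeSSgZjSf5lJEshS8M"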

Uploading a File

Files can be uploaded via HTTP POST request only. The upload endpoint is:

http://system_ip/_upload

It expects two values as form data, data and file.

data is a JSON-encoded payload. It must be the first form field provided, in this format:

{
    "method": "config.upload",
    "params": []
}

file is the file being uploaded.

The example below uses curl.

Request:

curl -X POST -u root:freenas -H "Content-Type: multipart/form-data" -F 'data={"method": "config.upload", "params": []}' -F "file=@/home/user/Desktop/config" http://system_ip/_upload/

Response:

{"job_id": 20}