API¶
tdclient.api.API is an internal class that represents the API.
tdclient.api¶
- class tdclient.api.API(apikey=None, user_agent=None, endpoint=None, headers=None, retry_post_requests=False, max_cumul_retry_delay=600, http_proxy=None, **kwargs)[source]¶
Bases: tdclient.bulk_import_api.BulkImportAPI, tdclient.connector_api.ConnectorAPI, tdclient.database_api.DatabaseAPI, tdclient.export_api.ExportAPI, tdclient.import_api.ImportAPI, tdclient.job_api.JobAPI, tdclient.partial_delete_api.PartialDeleteAPI, tdclient.result_api.ResultAPI, tdclient.schedule_api.ScheduleAPI, tdclient.server_status_api.ServerStatusAPI, tdclient.table_api.TableAPI, tdclient.user_api.UserAPI
Internal API class
- Parameters
apikey (str) – the API key of Treasure Data Service. If None is given, TD_API_KEY will be used if available.
user_agent (str) – custom User-Agent.
endpoint (str) – custom endpoint URL. If None is given, TD_API_SERVER will be used if available.
headers (dict) – custom HTTP headers.
retry_post_requests (bool) – Specify whether to allow the API client to retry POST requests. False by default.
max_cumul_retry_delay (int) – Maximum cumulative retry delay in seconds. 600 seconds by default.
http_proxy (str) – HTTP proxy setting. If None is given, HTTP_PROXY will be used if available.
- DEFAULT_ENDPOINT = 'https://api.treasuredata.com/'¶
- DEFAULT_IMPORT_ENDPOINT = 'https://api-import.treasuredata.com/'¶
- property apikey¶
- property endpoint¶
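Example (a minimal sketch of constructing the class directly; most applications use tdclient.Client instead, and the user_agent value below is just a placeholder). The api object created here is reused in the later sketches on this page.
>>> import os
>>> import tdclient.api
>>> api = tdclient.api.API(
...     apikey=os.environ.get("TD_API_KEY"),        # constructor also falls back to TD_API_KEY when apikey is None
...     endpoint=tdclient.api.API.DEFAULT_ENDPOINT,
...     user_agent="my-app/0.1",                    # optional custom User-Agent
... )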
tdclient.bulk_import_api¶
- class tdclient.bulk_import_api.BulkImportAPI[source]¶
Bases: object
Enable bulk importing of data to the targeted database and table.
This class is inherited by tdclient.api.API.
- bulk_import_delete_part(name, part_name, params=None)[source]¶
Delete the imported information with the specified name.
- Parameters
name (str) – Bulk import name.
part_name (str) – Bulk import part name.
params (dict, optional) – Extra parameters.
- Returns
True if succeeded.
- bulk_import_error_records(name, params=None)[source]¶
List the records that have errors under the specified bulk import name.
- Parameters
name (str) – Bulk import name.
params (dict, optional) – Extra parameters.
- Yields
Row of the data
- bulk_import_upload_file(name, part_name, format, file, **kwargs)[source]¶
Upload a file to the bulk import session with the specified name.
- Parameters
name (str) – Bulk import name.
part_name (str) – Bulk import part name.
format (str) – Format name. {msgpack, json, csv, tsv}
file (str or file-like) – the name of a file, or a file-like object, containing the data
**kwargs – Extra arguments.
There is more documentation on format, file and **kwargs at file import parameters.
In particular, for "csv" and "tsv" data, you can change how data columns are parsed using the dtypes and converters arguments.
dtypes is a dictionary used to specify a datatype for individual columns, for instance {"col1": "int"}. The available datatypes are "bool", "float", "int", "str" and "guess". If a column is also mentioned in converters, then the function will be used, NOT the datatype.
converters is a dictionary used to specify a function that will be used to parse individual columns, for instance {"col1": int}.
The default behaviour is "guess", which makes a best-effort attempt to decide the column datatype. See file import parameters for more details.
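Example (a hedged sketch of a CSV upload that overrides column parsing; the session name, part name, and file path are placeholders, and api is the API instance from the first sketch):
>>> api.bulk_import_upload_file(
...     "nightly_import", "part_001", "csv", "events.csv",
...     dtypes={"user_id": "int", "price": "float"},   # force these column datatypes
...     converters={"event_time": int},                # parse this column with int()
... )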
- bulk_import_upload_part(name, part_name, stream, size)[source]¶
Upload a part to the bulk import session with the specified name.
- Parameters
name (str) – Bulk import name.
part_name (str) – Bulk import part name.
stream (str or file-like) – Byte string or file-like object containing the data.
size (int) – The length of the data.
- commit_bulk_import(name, params=None)[source]¶
Commit the bulk import session with the specified name.
- Parameters
name (str) – Bulk import name.
params (dict, optional) – Extra parameters.
- Returns
True if succeeded.
- create_bulk_import(name, db, table, params=None)[source]¶
Enable bulk importing of data to the targeted database and table, stored in the default resource pool. The default expiration for a bulk import session is 30 days.
- Parameters
name (str) – Name of the bulk import.
db (str) – Name of target database.
table (str) – Name of target table.
params (dict, optional) – Extra parameters.
- Returns
True if succeeded
- delete_bulk_import(name, params=None)[source]¶
Delete the imported information with the specified name.
- Parameters
name (str) – Name of bulk import.
params (dict, optional) – Extra parameters.
- Returns
True if succeeded
- freeze_bulk_import(name, params=None)[source]¶
Freeze the bulk import with the specified name.
- Parameters
name (str) – Bulk import name.
params (dict, optional) – Extra parameters.
- Returns
True if succeeded.
- list_bulk_import_parts(name, params=None)[source]¶
Return the list of available parts uploaded through bulk_import_upload_part().
- Parameters
name (str) – Name of bulk import.
params (dict, optional) – Extra parameters.
- Returns
The list of bulk import part names.
- Return type
[str]
- list_bulk_imports(params=None)[source]¶
Return the list of available bulk imports.
- Parameters
params (dict, optional) – Extra parameters.
- Returns
The list of available bulk import details.
- Return type
[dict]
- perform_bulk_import(name, params=None)[source]¶
Execute a job to perform the bulk import with the indicated priority, using the indicated resource pool if given, else the account's default.
- Parameters
name (str) – Bulk import name.
params (dict, optional) – Extra parameters.
- Returns
Job ID
- Return type
str
- show_bulk_import(name)[source]¶
Show the details of the bulk import with the specified name.
- Parameters
name (str) – Name of bulk import.
- Returns
Detailed information of the bulk import.
- Return type
dict
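Example (a hedged end-to-end sketch of a bulk import session; the session, database, table, part, and file names are placeholders, and api is the API instance from the first sketch):
>>> api.create_bulk_import("session_1", "my_db", "my_table")
>>> api.bulk_import_upload_file("session_1", "part_001", "csv", "events.csv")
>>> api.freeze_bulk_import("session_1")
>>> job_id = api.perform_bulk_import("session_1")
>>> # wait for the perform job to finish (e.g. by polling job_status with the
>>> # returned job_id), then commit to make the data visible in the table:
>>> api.commit_bulk_import("session_1")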
tdclient.connector_api¶
- class tdclient.connector_api.ConnectorAPI[source]¶
Bases: object
Access Data Connector API which handles Data Connector.
This class is inherited by tdclient.api.API.
- connector_create(name, database, table, job, params=None)[source]¶
Create a Data Connector session.
- Parameters
name (str) – name of the connector job
database (str) – name of the database to perform connector job
table (str) – name of the table to perform connector job
job (dict) – dict representation of load.yml
params (dict, optional) – Extra parameters.
- config (str):
Embulk configuration as JSON format. See also https://www.embulk.org/docs/built-in.html#embulk-configuration-file-format
- cron (str, optional):
Schedule of the query. {"@daily", "@hourly", "10 * * * *" (custom cron)} See also: https://support.treasuredata.com/hc/en-us/articles/360001451088-Scheduled-Jobs-Web-Console
- delay (int, optional):
A delay ensures all buffered events are imported before running the query. Default: 0
- database (str):
Target database for the Data Connector session
- name (str):
Name of the Data Connector session
- table (str):
Target table for the Data Connector session
- time_column (str, optional):
Column in the table for registering config.out.time
- timezone (str):
Timezone for scheduled Data Connector session. See here for list of supported timezones https://gist.github.com/frsyuki/4533752
- Returns
dict
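Example (a hedged sketch; the session name, database, table, and the load configuration below are placeholders, the exact keys of the load.yml dict depend on the Embulk input plugin, and api is the API instance from the first sketch):
>>> load_config = {
...     "in": {"type": "s3", "bucket": "your-bucket", "path_prefix": "logs/csv-"},
...     "out": {"mode": "append"},
... }
>>> session = api.connector_create(
...     "daily_s3_load", "my_db", "my_table", load_config,
...     params={"cron": "@daily", "timezone": "UTC"},
... )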
- connector_delete(name)[source]¶
Delete a Data Connector session.
- Parameters
name (str) – name of the connector job
- Returns
dict
- connector_guess(job)[source]¶
Guess the Data Connector configuration.
- Parameters
job (dict) – dict representation of seed.yml. See also: https://www.embulk.org/docs/built-in.html#guess-executor
- Returns
The configuration of the Data Connector.
- Return type
dict
Examples
>>> config = {
...     "in": {
...         "type": "s3",
...         "bucket": "your-bucket",
...         "path_prefix": "logs/csv-",
...         "access_key_id": "YOUR-AWS-ACCESS-KEY",
...         "secret_access_key": "YOUR-AWS-SECRET-KEY"
...     },
...     "out": {"mode": "append"},
...     "exec": {"guess_plugins": ["json", "query_string"]},
... }
>>> td.api.connector_guess(config)
{'config': {'in': {'type': 's3', 'bucket': 'your-bucket', 'path_prefix': 'logs/csv-', 'access_key_id': 'YOUR-AWS-ACCESS-KEY', 'secret_access_key': 'YOU-AWS-SECRET-KEY', 'parser': {'charset': 'UTF-8', 'newline': 'LF', 'type': 'csv', 'delimiter': ',', 'quote': '"', 'escape': '"', 'trim_if_not_quoted': False, 'skip_header_lines': 1, 'allow_extra_columns': False, 'allow_optional_columns': False, 'columns': [{'name': 'sepal.length', 'type': 'double'}, {'name': 'sepal.width', 'type': 'double'}, {'name': 'petal.length', 'type': 'double'}, {'name': 'petal.width', 'type': 'string'}, {'name': 'variety', 'type': 'string'}]}}, 'out': {'mode': 'append'}, 'exec': {'guess_plugin': ['json', 'query_string']}, 'filters': [{'rules': [{'rule': 'upper_to_lower'}, {'pass_types': ['a-z', '0-9'], 'pass_characters': '_', 'replace': '_', 'rule': 'character_types'}, {'pass_types': ['a-z'], 'pass_characters': '_', 'prefix': '_', 'rule': 'first_character_types'}, {'rule': 'unique_number_suffix', 'max_length': 128}], 'type': 'rename'}, {'from_value': {'mode': 'upload_time'}, 'to_column': {'name': 'time'}, 'type': 'add_time'}]}}
- connector_history(name)[source]¶
Show the list of the executed jobs information for the Data Connector job.
- Parameters
name (str) – name of the connector job
- Returns
list
- connector_issue(db, table, job)[source]¶
Create a Data Connector job.
- Parameters
db (str) – name of the database to perform connector job
table (str) – name of the table to perform connector job
job (dict) – dict representation of load.yml
- Returns
Job ID
- Return type
str
- connector_preview(job)[source]¶
Show the preview of the Data Connector job.
- Parameters
job (dict) – dict representation of load.yml
- Returns
dict
- connector_run(name, **kwargs)[source]¶
Create a job to execute a Data Connector session.
- Parameters
name (str) – name of the connector job
**kwargs (optional) –
Extra parameters.
- scheduled_time (int):
Time in Unix epoch format that would be set as TD_SCHEDULED_TIME.
- domain_key (str):
Job domain key which is assigned to a single job.
- Returns
dict
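Example (a hedged sketch of triggering one run of an existing session; the session name is a placeholder and api is the API instance from the first sketch):
>>> import time
>>> run = api.connector_run("daily_s3_load", scheduled_time=int(time.time()))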
- connector_show(name)[source]¶
Show the information of a specific Data Connector session.
- Parameters
name (str) – name of the connector job
- Returns
dict
- connector_update(name, job)[source]¶
Update a specific Data Connector session.
- Parameters
name (str) – name of the connector job
job (dict) – dict representation of load.yml. For the detailed format, see also: https://www.embulk.org/docs/built-in.html#embulk-configuration-file-format
- Returns
dict
tdclient.database_api¶
- class tdclient.database_api.DatabaseAPI[source]¶
Bases: object
Access to Database API of Treasure Data Service.
This class is inherited by tdclient.api.API.
- create_database(db, params=None)[source]¶
Create a new database with the given name.
- Parameters
db (str) – Target database name.
params (dict) – Extra parameters.
- Returns
True if succeeded.
- Return type
bool
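Example (the database name is a placeholder; api is the API instance from the first sketch):
>>> api.create_database("my_new_db")
True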
tdclient.export_api¶
- class tdclient.export_api.ExportAPI[source]¶
Bases: object
Access to Export API.
This class is inherited by tdclient.api.API.
- export_data(db, table, storage_type, params=None)[source]¶
Create a job to export the contents from the specified database and table.
- Parameters
db (str) – Target database name.
table (str) – Target table name.
storage_type (str) – Name of storage type. e.g. “s3”
params (dict) – Extra parameters. The following keys can be used:
- access_key_id (str):
ID to access the information to be exported.
- secret_access_key (str):
Password for the access_key_id.
- file_prefix (str, optional):
Filename of exported file. Default: “<database_name>/<table_name>”
- file_format (str, optional):
File format of the information to be exported. {“jsonl.gz”, “tsv.gz”, “json.gz”}
- from (int, optional):
Start time of the data to be exported, in Unix epoch format.
- to (int, optional):
End time of the data to be exported, in Unix epoch format.
- assume_role (str, optional):
Assume role.
- bucket (str):
Name of bucket to be used.
- domain_key (str, optional):
Job domain key.
- pool_name (str, optional):
For Presto only. Pool name to be used; if not specified, the default pool is used.
- Returns
Job ID.
- Return type
str
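Example (a hedged sketch of exporting a table to S3; the bucket, credentials, and time range are placeholders, and api is the API instance from the first sketch):
>>> job_id = api.export_data(
...     "my_db", "my_table", "s3",
...     params={
...         "access_key_id": "YOUR-AWS-ACCESS-KEY",
...         "secret_access_key": "YOUR-AWS-SECRET-KEY",
...         "bucket": "your-bucket",
...         "file_format": "jsonl.gz",
...         "from": 1577836800,   # Unix epoch, start of the export range
...         "to": 1580515200,     # Unix epoch, end of the export range
...     },
... )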
tdclient.import_api¶
- class tdclient.import_api.ImportAPI[source]¶
Bases: object
Import data into Treasure Data Service.
This class is inherited by tdclient.api.API.
- import_data(db, table, format, bytes_or_stream, size, unique_id=None)[source]¶
Import data into Treasure Data Service.
This method expects data from a file-like object formatted with "msgpack.gz".
- Parameters
db (str) – name of a database
table (str) – name of a table
format (str) – format of data type (e.g. “msgpack.gz”)
bytes_or_stream (str or file-like) – a byte string or a file-like object containing the data
size (int) – the length of the data
unique_id (str) – a unique identifier of the data
- Returns
A float representing the elapsed time to import the data.
- import_file(db, table, format, file, unique_id=None, **kwargs)[source]¶
Import data into Treasure Data Service from an existing file on the filesystem.
This method will decompress/deserialize records from the given file, and then convert them into a format acceptable to Treasure Data Service ("msgpack.gz"). This method is a wrapper around import_data.
- Parameters
db (str) – name of a database
table (str) – name of a table
format (str) – format of data type (e.g. “msgpack”, “json”)
file (str or file-like) – the name of a file, or a file-like object, containing the data
unique_id (str) – a unique identifier of the data
- Returns
A float representing the elapsed time to import the data.
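Example (a hedged sketch; the database, table, and file name are placeholders, and api is the API instance from the first sketch):
>>> elapsed = api.import_file("my_db", "my_table", "json", "records.json")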
tdclient.job_api¶
- class tdclient.job_api.JobAPI[source]¶
Bases: object
Access to Job API.
This class is inherited by tdclient.api.API.
- job_result(job_id)[source]¶
Return the job result.
- Parameters
job_id (int) – Job ID
- Returns
Job result in list
- job_result_each(job_id)[source]¶
Yield a row of the job result.
- Parameters
job_id (int) – Job ID
- Yields
Row in a result
- job_result_format(job_id, format)[source]¶
Return the job result in the specified format.
- Parameters
job_id (int) – Job ID
format (str) – Output format of the job result information. “json” or “msgpack”
- Returns
The query result of the specified job in the specified format.
- job_result_format_each(job_id, format)[source]¶
Yield a row of the job result in the specified format.
- Parameters
job_id (int) – job ID
format (str) – Output format of the job result information. “json” or “msgpack”
- Yields
A row of the query result of the specified job in the specified format.
- job_status(job_id)[source]¶
Show job status.
- Parameters
job_id (str) – job ID
- Returns
The status information of the given job id at last execution.
- kill(job_id)[source]¶
Stop the specified job if it is running.
- Parameters
job_id (str) – Job Id to kill
- Returns
Job status before killing
- list_jobs(_from=0, to=None, status=None, conditions=None)[source]¶
Show the list of Jobs.
- Parameters
_from (int) – Gets the Job from the nth index in the list. Default: 0
to (int, optional) – Gets the Job up to the nth index in the list. By default, the first 20 jobs in the list are displayed
status (str, optional) – Filter by given status. {“queued”, “running”, “success”, “error”}
conditions (str, optional) – Condition for TIMESTAMPDIFF() to search for slow queries. Avoid using this parameter as it can be dangerous.
- Returns
a list of dict which represents a job
- query(q, type='hive', db=None, result_url=None, priority=None, retry_limit=None, **kwargs)[source]¶
Create a job for the given query.
- Parameters
q (str) – Query string.
type (str) – Query type. hive, presto, bulkload. Default: hive
db (str) – Database name.
result_url (str) – Result output URL. e.g., postgresql://<username>:<password>@<hostname>:<port>/<database>/<table>
priority (int or str) – Job priority. As a string: "Very low", "Low", "Normal", "High", "Very high". As an int: a number in the range -2 to 2.
retry_limit (int) – Automatic retry count.
**kwargs – Extra options.
- Returns
Job ID issued for the query
- Return type
str
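Example (a hedged sketch combining query, job_status and job_result_each; the query and database are placeholders, api is the API instance from the first sketch, and the polling loop assumes job_status returns the status as a plain string):
>>> import time
>>> job_id = api.query("SELECT COUNT(1) FROM www_access", type="presto", db="sample_datasets")
>>> while api.job_status(job_id) in ("queued", "running"):   # assumed string status
...     time.sleep(2)
>>> for row in api.job_result_each(job_id):
...     print(row)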
- show_job(job_id)[source]¶
Return detailed information of a Job.
- Parameters
job_id (str) – job ID
- Returns
Detailed information of a job
- Return type
dict
- JOB_PRIORITY = {'HIGH': 1, 'LOW': -1, 'NORM': 0, 'NORMAL': 0, 'VERY HIGH': 2, 'VERY LOW': -2, 'VERY-HIGH': 2, 'VERY-LOW': -2, 'VERY_HIGH': 2, 'VERY_LOW': -2}¶
tdclient.partial_delete_api¶
- class tdclient.partial_delete_api.PartialDeleteAPI[source]¶
Bases: object
Create a job to partially delete the contents of the table with the given time range.
This class is inherited by tdclient.api.API.
- partial_delete(db, table, to, _from, params=None)[source]¶
Create a job to partially delete the contents of the table with the given time range.
- Parameters
db (str) – Target database name.
table (str) – Target table name.
to (int) – Time in Unix Epoch format indicating the End date and time of the data to be deleted. Should be set only by the hour. Minutes and seconds values will not be accepted.
_from (int) – Time in Unix Epoch format indicating the Start date and time of the data to be deleted. Should be set only by the hour. Minutes and seconds values will not be accepted.
params (dict, optional) –
Extra parameters.
- pool_name (str, optional):
Indicates the resource pool to execute this job. If not provided, the account’s default resource pool would be used.
- domain_key (str, optional):
Domain key that will be assigned to the partial delete job to be created
- Returns
Job ID.
- Return type
str
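Example (a hedged sketch deleting the last 24 hours of data; database and table names are placeholders, api is the API instance from the first sketch, and both boundaries are aligned to the hour as required):
>>> import time
>>> now = int(time.time())
>>> hour = now - now % 3600
>>> job_id = api.partial_delete("my_db", "my_table", to=hour, _from=hour - 24 * 3600)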
tdclient.result_api¶
- class tdclient.result_api.ResultAPI[source]¶
Bases: object
Access to Result API.
This class is inherited by tdclient.api.API.
- create_result(name, url, params=None)[source]¶
Create a new authentication with the specified name.
- Parameters
name (str) – Authentication name.
url (str) – URL of the authentication to be created. e.g. "ftp://test.com/"
params (dict, optional) – Extra parameters.
- Returns
True if succeeded.
- Return type
bool
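Example (the authentication name is a placeholder; api is the API instance from the first sketch):
>>> api.create_result("my_ftp_output", "ftp://test.com/")
True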
tdclient.schedule_api¶
- class tdclient.schedule_api.ScheduleAPI[source]¶
Bases: object
Access to Schedule API.
This class is inherited by tdclient.api.API.
- create_schedule(name, params=None)[source]¶
Create a new scheduled query with the specified name.
- Parameters
name (str) – Scheduled query name.
params (dict, optional) –
Extra parameters.
- type (str):
Query type. {“presto”, “hive”}. Default: “hive”
- database (str):
Target database name.
- timezone (str):
Scheduled query’s timezone. e.g. “UTC” For details, see also: https://gist.github.com/frsyuki/4533752
- cron (str, optional):
Schedule of the query. {"@daily", "@hourly", "10 * * * *" (custom cron)} See also: https://support.treasuredata.com/hc/en-us/articles/360001451088-Scheduled-Jobs-Web-Console
- delay (int, optional):
A delay ensures all buffered events are imported before running the query. Default: 0
- query (str):
The query to be executed on schedule. See also: https://support.treasuredata.com/hc/en-us/articles/360012069493-SQL-Examples-of-Scheduled-Queries
- priority (int, optional):
Priority of the query. Range is from -2 (very low) to 2 (very high). Default: 0
- retry_limit (int, optional):
Automatic retry count. Default: 0
- engine_version (str, optional):
Engine version to be used. If none is specified, the account’s default engine version would be set. {“stable”, “experimental”}
- pool_name (str, optional):
For Presto only. Pool name to be used; if not specified, the default pool is used.
- result (str, optional):
Location where to store the result of the query. e.g. ‘tableau://user:password@host.com:1234/datasource’
- Returns
Start date time.
- Return type
datetime.datetime
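Example (a hedged sketch of creating a daily scheduled query; the schedule name, database, and query text are placeholders, and api is the API instance from the first sketch):
>>> start = api.create_schedule(
...     "nightly_rollup",
...     params={
...         "type": "presto",
...         "database": "my_db",
...         "cron": "@daily",
...         "timezone": "UTC",
...         "query": "SELECT COUNT(1) FROM my_table",
...     },
... )
>>> # `start` is a datetime.datetime giving the first scheduled run time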
- delete_schedule(name)[source]¶
Delete the scheduled query with the specified name.
- Parameters
name (str) – Target scheduled query name.
- Returns
Tuple of cron and query.
- Return type
(str, str)
- history(name, _from=0, to=None)[source]¶
Get the history details of the saved query for the past 90 days.
- Parameters
name (str) – Target name of the scheduled query.
_from (int, optional) – Indicates from which nth record in the run history would be fetched. Default: 0. Note: Count starts from zero. This means that the first record in the list has a count of zero.
to (int, optional) – Indicates up to which nth record in the run history would be fetched. Default: 20
- Returns
History of the scheduled query.
- Return type
dict
- list_schedules()[source]¶
Get the list of all the scheduled queries.
- Returns
The list of scheduled queries.
- Return type
[(name:str, cron:str, query:str, database:str, result_url:str)]
- run_schedule(name, time, num=None)[source]¶
Execute the specified query.
- Parameters
name (str) – Target scheduled query name.
time (int) – Time in Unix epoch format that would be set as TD_SCHEDULED_TIME
num (int, optional) – Indicates how many times the query will be executed. Value should be 9 or less. Default: 1
- Returns
[(job_id:int, type:str, scheduled_at:str)]
- Return type
list of tuple
- update_schedule(name, params=None)[source]¶
Update the scheduled query.
- Parameters
name (str) – Target scheduled query name.
params (dict) –
Extra parameters.
- type (str):
Query type. {“presto”, “hive”}. Default: “hive”
- database (str):
Target database name.
- timezone (str):
Scheduled query’s timezone. e.g. “UTC” For details, see also: https://gist.github.com/frsyuki/4533752
- cron (str, optional):
Schedule of the query. {"@daily", "@hourly", "10 * * * *" (custom cron)} See also: https://support.treasuredata.com/hc/en-us/articles/360001451088-Scheduled-Jobs-Web-Console
- delay (int, optional):
A delay ensures all buffered events are imported before running the query. Default: 0
- query (str):
The query to be executed on schedule. See also: https://support.treasuredata.com/hc/en-us/articles/360012069493-SQL-Examples-of-Scheduled-Queries
- priority (int, optional):
Priority of the query. Range is from -2 (very low) to 2 (very high). Default: 0
- retry_limit (int, optional):
Automatic retry count. Default: 0
- engine_version (str, optional):
Engine version to be used. If none is specified, the account’s default engine version would be set. {“stable”, “experimental”}
- pool_name (str, optional):
For Presto only. Pool name to be used; if not specified, the default pool is used.
- result (str, optional):
Location where to store the result of the query. e.g. ‘tableau://user:password@host.com:1234/datasource’
tdclient.server_status_api¶
- class tdclient.server_status_api.ServerStatusAPI[source]¶
Bases: object
Access to Server Status API.
This class is inherited by tdclient.api.API.
tdclient.table_api¶
- class tdclient.table_api.TableAPI[source]¶
Bases: object
Access to Table API.
This class is inherited by tdclient.api.API.
- change_database(db, table, dest_db)[source]¶
Move a target table from its original database to a new destination database.
- Parameters
db (str) – Target database name.
table (str) – Target table name.
dest_db (str) – Destination database name.
- Returns
True if succeeded
- Return type
bool
- create_log_table(db, table)[source]¶
Create a new table in the database and register it in PlazmaDB.
- Parameters
db (str) – Target database name.
table (str) – Target table name.
- Returns
True if succeeded.
- Return type
bool
- delete_table(db, table)[source]¶
Delete the specified table.
- Parameters
db (str) – Target database name.
table (str) – Target table name.
- Returns
Type information of the table (e.g. “log”).
- Return type
str
- list_tables(db)[source]¶
Get the list of tables in the database.
- Parameters
db (str) – Target database name.
- Returns
Detailed table information.
- Return type
dict
Examples
>>> td.api.list_tables("my_db")
{ 'iris': {'id': 21039862, 'name': 'iris', 'estimated_storage_size': 1236, 'counter_updated_at': '2019-09-18T07:14:28Z', 'last_log_timestamp': datetime.datetime(2019, 1, 30, 5, 34, 42, tzinfo=tzutc()), 'delete_protected': False, 'created_at': datetime.datetime(2019, 1, 30, 5, 34, 42, tzinfo=tzutc()), 'updated_at': datetime.datetime(2019, 1, 30, 5, 34, 46, tzinfo=tzutc()), 'type': 'log', 'include_v': True, 'count': 150, 'schema': [['sepal_length', 'double', 'sepal_length'], ['sepal_width', 'double', 'sepal_width'], ['petal_length', 'double', 'petal_length'], ['petal_width', 'double', 'petal_width'], ['species', 'string', 'species']], 'expire_days': None, 'last_import': datetime.datetime(2019, 9, 18, 7, 14, 28, tzinfo=tzutc())}, }
- swap_table(db, table1, table2)[source]¶
Swap the two specified tables belonging to the same database; effectively, their names are exchanged.
- Parameters
db (str) – Target database name
table1 (str) – First target table for the swap.
table2 (str) – Second target table for the swap.
- Returns
True if succeeded.
- Return type
bool
- tail(db, table, count, to=None, _from=None, block=None)[source]¶
Get the contents of the table in reverse order based on the registered time (last data first).
- Parameters
db (str) – Target database name.
table (str) – Target table name.
count (int) – Number of records to show from the end.
to – Deprecated parameter.
_from – Deprecated parameter.
block – Deprecated parameter.
- Returns
Contents of the table.
- Return type
[dict]
- update_expire(db, table, expire_days)[source]¶
Update the expire days for the specified table.
- Parameters
db (str) – Target database name.
table (str) – Target table name.
expire_days (int) – Number of days after which the contents of the specified table will expire.
- Returns
True if succeeded.
- Return type
bool
- update_schema(db, table, schema_json)[source]¶
Update the table schema.
- Parameters
db (str) – Target database name.
table (str) – Target table name.
schema_json (str) – Schema in JSON-string format. See also: Client.update_schema. e.g. '[["sep_len", "long", "sep_len"], ["sep_wid", "long", "sep_wid"]]'
- Returns
True if succeeded.
- Return type
bool
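Example (a hedged sketch; the database, table, and column definitions are placeholders, and api is the API instance from the first sketch):
>>> import json
>>> schema = [["sep_len", "long", "sep_len"], ["sep_wid", "long", "sep_wid"]]
>>> api.update_schema("my_db", "my_table", json.dumps(schema))
True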