Udb server interface
aq_udb [-h] Global_Opt Export_Spec|Order_Spec|Mnt_Spec
Global_Opt:
[-test] [-verb] [-stat]
[-spec UdbSpec | -db DbName]
[-server AdrSpec [AdrSpec ...]]
[-local]
Export_Spec:
-exp [DbName:]TabName | -cnt [DbName:]TabName | -scn [DbName:]TabName
[-seed RandSeed]
[-lim_usr Num] [-lim_rec Num]
[-var ColName Val]
[-pp TabName
[-bvar ColName Val]
[-eval ColName Expr]
[-filt FilterSpec]
[-goto DestSpec]
[-end_of_scan DestSpec]
-endpp]
[-bvar ColName Val]
[-eval ColName Expr]
[-filt FilterSpec]
[-goto DestSpec]
[-mod ModSpec [ModSrc]]
[-sort[,AtrLst] [ColName ...] [-top Num]]
[-o[,AtrLst] File] [-c ColName [ColName ...]]
Order_Spec:
-ord[,AtrLst] [DbName:]TabName [ColName ...]
Mnt_Spec:
-clr [DbName:]TabName | -probe [DbName:]
aq_udb is a client of the Udb server.
It is used to send commands to the server (or a pool of servers) to manipulate and/or export the data held by the server.
It can also instruct the server to clear a portion or all of the held data.
Data manipulation can be done using builtin options or through a custom module that is dynamically loaded on the server side.
Note: Data import to the Udb server is done by aq_pp.
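Illustrative example (the database name “clickdb” and table name “Events” are hypothetical and must be defined in the Udb spec file):
$ aq_udb -db clickdb -exp Events -o,csv events.csv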
-test
Test command line arguments and exit.
If specified twice (-test -test), a more thorough test will be attempted. For example, the program will try to connect to Udb in test mode.
-verb
Verbose mode.
-stat
Print a record count summary line to stderr at the end of processing. The line has the form:
aq_udb: rec=Count
-spec UdbSpec
| -db DbName
Set the Udb spec file to UdbSpec.
Alternatively, “-db DbName” indirectly sets the spec file to ”.conf/DbName.spec” in the current work directory.
If neither option is given, DbName can be given in a later -exp, -cnt, -scn, -ord or -clr option.
If no spec file is given at all, “udb.spec” in the current work directory is assumed.
The spec file contains server IPs (or domain names) and table/vector definitions. See udb.spec for details.
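For example, the two invocations below are equivalent, assuming a hypothetical spec file ”.conf/clickdb.spec” exists in the current work directory:
$ aq_udb -spec .conf/clickdb.spec -cnt Events
$ aq_udb -db clickdb -cnt Events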
-server AdrSpec [AdrSpec ...]
AdrSpec has the form IP_or_Domain[|IP_or_Domain_Alt][:Port].
See udb.spec for details.
-local
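Illustrative -server usage addressing two hypothetical server instances (the IPs and port are placeholders):
$ aq_udb -server 10.0.0.1:7110 10.0.0.2:7110 -cnt Events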
-exp [DbName:]TabName
Export data in TabName.
TabName refers to a table/vector defined in the Udb spec file (see udb.spec).
To export the “PKEY” (bucket key) only, specify a ”.” (a dot) as TabName.
Optional DbName sets the Udb spec file as in the -db option.
-cnt [DbName:]TabName
Count the data in TabName.
TabName refers to a table/vector defined in the Udb spec file (see udb.spec).
To do a “PKEY” (bucket key) count only, specify a ”.” (a dot) as TabName.
Optional DbName defines UdbSpec indirectly as in the -db option.
-scn [DbName:]TabName
Scan data in TabName
.
TabName
refers to a table/vector defined in the Udb spec file
(see udb.spec).
To scan the user buckets only, specify a ”.” (a dot) as TabName
.
Optional DbName
sets the Udb spec file as in the -db option.
There is no default output. However, if used with a module (see -mod), the module can optionally output custom data. This option is typically used with certain data inspection/modification rules or modules.
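For example, to count the user buckets only and to export just the bucket keys, assuming the default “udb.spec”:
$ aq_udb -cnt .
$ aq_udb -exp . -o -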
-seed RandSeed
Set the seed for the $Random -eval builtin variable.
-lim_usr Num
Limit the export/count/scan to at most Num user buckets.
-lim_rec Num
Limit the export/count/scan to at most Num data records.
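For example, a reproducible 1% random sample of a hypothetical table Test, capped at 1000 records:
$ aq_udb -exp Test -seed 12345 -filt 'Eval($Random % 100) == 0' -lim_rec 1000 -o -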
-var ColName Val
Set the value of the Var vector column ColName
to Val
.
A Var vector must be defined in the Udb spec file and ColName
must be a column in that table.
See udb.spec for details.
Note that a string Val must be quoted; see the String Constant Spec for details.
Example:
$ aq_udb ... -var Var1 0 ...
-bvar ColName Val
Same as -var except that the column is set to Val
repeatedly
in each bucket before other processing rules are executed.
Note that a string Val must be quoted; see the String Constant Spec for details.
This rule can also be used within a -pp group. In this case,
ColName
is set to Val
in each bucket before other pre-processing
rules are executed.
See Data Processing Steps for details on these usages.
Example:
$ aq_udb ... -pp -bvar Var1 0 ...
-eval ColName Expr
For each row in the table/vector being exported/counted/scanned,
evaluate expression Expr
and place the result in a column identified
by ColName
. The column can be part of the target table or the Var vector.
This rule can also be used within a -pp group. In this case,
the target table becomes the -pp
table.
Note that -eval
rules inside -pp groups are evaluated before those
for the target table/vector. See Data Processing Steps for details.
Expr
is the expression to evaluate.
Data type of the evaluated result must be compatible with the data type of
the target column. For example, string result for a string column and
numeric result for a numeric column (there is no automatic type conversion;
however, explicit conversion can be done using the To*()
functions
described below).
Operands in the expression can be columns from the target table/vector,
columns from other user vectors, columns from the Var vector,
constants, builtin variables and functions.
To address columns other than those in the target table/vector, use the VecName.ColName format. For the Var vector, VecName is optional unless ColName also exists in the target.
Type conversion functions: ToIP(), ToF(), ToI() and ToS().
Builtin variables:
$Random - a random number.
$RowNum - the current row number.
Standard functions:
See aq-emod for a list of supported functions.
Example:
$ aq_udb -exp Test -eval c_delta 'c1 - c2'
-filt FilterSpec
For each row in the table/vector being exported/counted/scanned,
evaluate FilterSpec
and use the result to determine whether to
keep the data row.
The result can also be used in a -if/-elif/-endif
for
Rule Execution Controls.
This rule can also be used within a -pp group. In this case,
the target table becomes the -pp
table.
Note that -filt
rules inside -pp groups are evaluated before those
for the target table/vector. See Data Processing Steps for details.
FilterSpec
is the filter to evaluate.
It has the basic form [!] LHS [<compare> RHS]
where:
! - Negates the result of the comparison. It is recommended that !(...) be used to clarify the intended operation even though it is not required.
LHS, RHS - The values to compare. Each can be a column name or a constant (an Eval(Expr) result can also be used, as in the second example below). To address columns other than those in the target table/vector, use the VecName.ColName format. For the Var vector, VecName is optional unless ColName also exists in the target.
==, >, <, >=, <= - LHS and RHS comparison.
~==, ~>, ~<, ~>=, ~<= - LHS and RHS case insensitive comparison; string type only.
!=, !~= - Negation of the above equal operators.
&= - Perform a “(LHS & RHS) == RHS” check; numeric types only.
!&= - Negation of the above.
& - Perform a “(LHS & RHS) != 0” check; numeric types only.
!& - Negation of the above.
More complex expressions can be constructed by using (...) (grouping), ! (negation), || (or) and && (and).
For example:
LHS_1 == RHS_1 && !(LHS_2 == RHS_2 || LHS_3 == RHS_3)
Example:
$ aq_udb -exp Test -filt 't > 123456789'
$ aq_udb -exp Test -filt 'Eval($Random % 100) == 0'
-goto DestSpec
Go to DestSpec. This is usually done conditionally within a
-if/-elif/-endif
block (see Rule Execution Controls for details).
DestSpec
is the destination to go to. It is one of:
next_bucket - Skip the current user bucket entirely. The export/count/scan processing on this bucket will also be skipped.
next_row - Skip the current data row and start over on the next row.
+Num - Jump over Num -eval, -filt and -goto rules. Num=0 means the next rule, Num=1 means skip over one rule, and so on.
This rule can also be used within a -pp group. In this case, these additional destinations are supported:
proc_bucket - Terminate all -pp processing (i.e., stop the current -pp group and skip all pending -pp groups) and start the export/count/scan operation in the current user bucket.
next_pp - Stop the current -pp group and start the next one.
-mod ModSpec [ModSrc]
Specify a module to be loaded on the server side during an export/count/scan operation. A module contains one or more processing functions which are called in each user bucket according to the Data Processing Steps. Only one such module can be specified.
ModSpec
has the form ModName
or ModName("Arg1", "Arg2", ...)
where ModName
is the module name and Arg*
are module dependent
arguments. Note that the arguments must be string constants;
for this reason, they must be quoted according to the
string constant spec.
ModSrc is an optional module source file. It can be a module file with a .so extension.
Without ModSrc, the server will look for a preinstalled module matching ModName.
Standard modules:
roi("VecName.Count_Col", "TabName.Page_Col", "Page1[,AtrLst]", ...)
Module for ROI counting. ROI spec is given in the module arguments. There are 3 or more arguments:
VecName.Count_Col - Column to save the matched count to. It must have type I.
TabName.Page_Col - Column to get the match value from. It must have type S. Rows in the table must already be in the desired ROI matching order (usually ascending time order).
PageN[,AtrLst] - One or more pages to match against the TabName.Page_Col value. Each page is given as a separate module argument. Optional AtrLst is a comma separated list containing:
ncas - Do case insensitive match.
seq - Require that the page match occur immediately after the previous match (i.e., no unmatched page in between). Applicable on the second page and up only.
Either an exact or a wildcard match can be done. Exact match will either match the entire TabName.Page_Col value or up to (but not including) a ‘?’ or ‘#’ character.
Wildcard match is done if Page contains ‘*’ (matches any number of bytes) and/or ‘?’ (matches any 1 byte).
Literal ‘,’, ‘:’, ‘*’, ‘?’ and ‘\’ in Page must be ‘\’ escaped.
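A hedged sketch of the roi module (the vector name “Stats”, table name “Clicks” and their columns are hypothetical; they must be defined in the Udb spec file with the required I and S types):
$ aq_udb -scn Clicks -mod 'roi("Stats.roi_cnt", "Clicks.page", "/home*", "/cart*,seq")'
$ aq_udb -exp Stats -c roi_cnt
Here both pages use wildcard matching; the second page carries the seq attribute, so it matches only when it immediately follows the previous page match.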
-pp TabName [-bvar ... -eval ... -filt ... -goto ... -end_of_scan ...] -endpp
-pp
groups one or more -bvar, -eval, -filt and/or -goto
actions together.
Each group performs pre-processing at the user bucket level before
data in the bucket is exported/counted/scanned.
See Data Processing Steps for details.
TabName
sets the target table/vector for the rules in the -pp
group.
It may refer to a table/vector or the user bucket itself.
To target a table/vector, specify its name.
To target the user bucket itself, specify a ”.” (a dot).
”.” is a pseudo vector containing a single read only “PKEY” column.
The -bvar rules in the group are always executed first. Then the list of -eval, -filt and -goto rules are executed in order. Rule executions can also be made conditional by adding “if-else” controls. See Rule Execution Controls for details.
-end_of_scan DestSpec
- a special rule that defines the action to take after all the rows in the target table have been exhausted.
The default action is to start the next -pp group.
Use DestSpec to control the exact behavior:
next_bucket - Skip the current user bucket entirely. The export/count/scan processing on this bucket will also be skipped.
proc_bucket - Skip all pending -pp groups and start the export/count/scan operation in the current user bucket.
next_pp - Start the next -pp group. This is the default behavior at the end of a -pp table scan.
+Num - Jump over Num -pp groups. Num=0 is equivalent to next_pp, Num=1 means skip over the next -pp group as well, and so on.
This option is not position dependent - it can be specified anywhere within a -pp group.
-endpp
marks the end of a -pp
group.
Example:
$ aq_udb -exp Test1 -pp 'Test2' -goto proc_bucket -end_of_scan next_bucket
The -goto rule will be executed on the first row, causing execution to jump to export processing; in this way, the end-of-scan condition is not triggered. However, if Test2 is empty, -goto is not executed and end-of-scan is triggered.
$ aq_udb -exp Test -pp . -filt 'Eval($Random % 100) == 0' -endpp -filt 't > 123456789'
-endpp is mandatory here to prevent misinterpretation of the 2nd -filt.
-sort[,AtrLst] [ColName ...] [-top Num]
-exp output post processing option.
When exporting a table/vector,
use ColName
to set the desired sort columns.
If no ColName
is given, the “PKEY” column is assumed.
The sort columns must be in the output columns.
When exporting the “PKEY” (bucket key) only, no ColName
is needed.
Sort is always done by the “PKEY”.
Optional AtrLst
is a comma separated list containing:
dec - Sort in descending order. Default order is ascending.
-top limits the output to the top Num records in the result.
Note: Sort should not be used if the output contains columns other than those from the target table/vector (e.g. other vector columns).
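For example, to export the 10 largest values of a hypothetical numeric column cnt:
$ aq_udb -exp Test -sort,dec cnt -top 10 -o -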
-o[,AtrLst] File
Export output option.
Set the output attributes and file.
If File
is a ‘-’ (a single dash), data will be written to stdout.
Optional AtrLst
is described under Output File Attributes.
If this option is not used with an export, data is written to stdout.
Example:
$ aq_udb -exp Test ... -o,esc,noq -
-c ColName [ColName ...]
Select columns to output during an export.
To address columns other than those in the target table/vector, use the
VecName.ColName
format. For the Var vector, VecName
is optional
unless ColName
also exists in the target.
Example:
$ aq_udb -exp Test ... -c Test_Col1 ... Test_ColN Var_Col1 ... Var_ColN
-ord[,AtrLst] [DbName:]TabName [ColName ...]
Sort records in table TabName
within each bucket.
Optional DbName
sets the Udb spec file as in the -db option.
ColName
sets the desired sort columns.
If no ColName
is given, the “TKEY” column is assumed
(see udb.spec).
Optional AtrLst
is a comma separated list containing:
dec - Sort in descending order. Default order is ascending.
If TabName is a ”.” (a dot), all tables with a “TKEY” will be sorted. No ColName is needed in this case.
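For example, to sort a hypothetical table Events within each bucket by its column t in descending order:
$ aq_udb -db clickdb -ord,dec Events t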
-clr [DbName:]TabName
Remove/reset TabName
data in the database.
Optional DbName
sets the Udb spec file as in the -db option.
If TabName refers to the Var vector (i.e., TabName is “var”), the columns are reset to 0/blank.
If TabName is a ”.” (a dot), all user buckets will be removed, along with all tables/vectors in the buckets.
The Var vector will be reset as well.
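For example, to remove one hypothetical table's data, or to remove all user buckets:
$ aq_udb -db clickdb -clr Events
$ aq_udb -db clickdb -clr .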
-probe [DbName:]
Probe the servers and exit.
Optional DbName
sets the Udb spec file as in the -db option.
This is typically used to check if all the target servers are up and ready.
If successful, the program exits with status 0. Otherwise, the program exits with a non-zero status code along with error messages printed to stderr.
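A simple readiness check sketch using the exit status (the database name is hypothetical):
$ aq_udb -db clickdb -probe && echo 'servers ready'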
Output File Attributes
Each output option can have a list of comma separated attributes:
notitle - Suppress the column name label row from the output. A label row is normally included by default.
app - When outputting to a file, append to it instead of overwriting.
csv - Output in CSV format. This is the default.
sep=c or sep=\xHH - Output in ‘c’ (single byte) separated value format. ‘xHH’ is a way to specify ‘c’ via its HEX value HH. Note that sep=, is not the same as csv because CSV is a more advanced format.
bin - Output in aq_tool’s internal binary format.
esc - Use ‘\’ to escape the field separator, ‘”’ and ‘\’ (non binary).
noq - Do not quote string fields (CSV).
fmt_g - Use “%g” as print format for F type columns. Only use this to aid data inspection (e.g., during integrity check or debugging).
If no output format attribute is given, CSV is assumed.
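For example, to append escaped, unquoted records without a label row to an existing file (the table name is hypothetical):
$ aq_udb -exp Test -o,app,esc,noq,notitle out.csv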
String Constant Spec
A string constant must be quoted between double or single quotes. With double quotes, special character sequences can be used to represent special characters. With single quotes, no special sequence is recognized; in other words, a single quote cannot occur between single quotes.
Character sequences recognized between double quotes are:
\\ - represents a literal backslash character.
\" - represents a literal double quote character.
\b - represents a literal backspace character.
\f - represents a literal form feed character.
\n - represents a literal new line character.
\r - represents a literal carriage return character.
\t - represents a literal horizontal tab character.
\v - represents a literal vertical tab character.
\0 - represents a NULL character.
\xHH - represents a character whose HEX value is HH.
\<newline> - represents a line continuation sequence; both the backslash and the newline will be removed.
Sequences that are not recognized will be kept as-is.
Two or more quoted strings can be used back to back to form a single string. For example,
'a "b" c'" d 'e' f" => a "b" c d 'e' f
Rule Execution Controls
-pp also supports conditional actions using the -if[not], -elif[not], -else and -endif construction:
-if[not] RuleToCheck RuleToRun ... -elif[not] RuleToCheck RuleToRun ... -else RuleToRun ... -endif
Supported RuleToCheck are -eval and -filt.
Supported RuleToRun are -eval, -filt and -goto.
Example:
$ aq_udb -exp Test -pp Test -bvar v_seq 0 -if -filt 'flag == "yes"' -eval v_seq 'v_seq + 1' -eval c3 'v_seq' -else -eval c3 '0' -endif
Data Processing Steps
For each export/count/scan operation, data is processed according to the command line options in this way:
In each user bucket, the -pp groups (if any) are run first, in the order given. For each -pp group: the group's -bvar rules are executed first; then the group's -pp table is scanned, and for each row in the table the group's -eval, -filt and -goto rules are executed in order (subject to any Rule Execution Controls). When the scan of the -pp table ends, the group's -end_of_scan action (next_pp by default) determines what happens next.
The export/count/scan is then performed on the target table/vector in the bucket: the -bvar rules are executed first, then for each row the -eval, -filt and -goto rules are executed in order to determine whether and how the row is exported/counted/scanned. If a -mod module is given, its processing functions are also called in each user bucket as part of these steps.
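An end-to-end sketch tying these steps together (the database, table, column and Var column names are hypothetical; a Var vector with a numeric column v_seen must be defined in the Udb spec file):
$ aq_udb -db clickdb -exp Events -pp Sessions -bvar v_seen 0 -eval v_seen '1' -endpp -filt 'v_seen == 1' -sort t -o,csv out.csv
For each user bucket, the -pp group first resets v_seen to 0 and sets it to 1 for each Sessions row; the export then keeps Events rows only from buckets where v_seen is 1, sorts the output by t and writes CSV to out.csv.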