sugartensor package

Submodules

sugartensor.sg_activation module

sugartensor.sg_activation.sg_elu(x, opt)[source]
sugartensor.sg_activation.sg_leaky_relu(x, opt)[source]

See [Xu et al. 2015](https://arxiv.org/pdf/1505.00853v2.pdf)

Args:

x: A tensor. opt:

name: A name for the operation (optional).
Returns:
A Tensor with the same type and shape as x.
sugartensor.sg_activation.sg_linear(x, opt)[source]
sugartensor.sg_activation.sg_relu(x, opt)[source]
sugartensor.sg_activation.sg_relu6(x, opt)[source]
sugartensor.sg_activation.sg_sigmoid(x, opt)[source]
sugartensor.sg_activation.sg_softmax(x, opt)[source]
sugartensor.sg_activation.sg_softplus(x, opt)[source]
sugartensor.sg_activation.sg_softsign(x, opt)[source]
sugartensor.sg_activation.sg_tanh(x, opt)[source]
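
A short usage sketch: once sugartensor is imported, the activation functions above can also be called as chainable Tensor methods (via the injection mechanism described under sugartensor.sg_main.sg_inject below). The tensor values in the comments are illustrative only.

```
import sugartensor as tf  # sugartensor is conventionally imported as tf

x = tf.constant([[-1.0, 0.0, 2.0]])

# activations are chainable once injected as Tensor methods
y = x.sg_relu()        # [[0., 0., 2.]]
z = x.sg_leaky_relu()  # leaky ReLU, see Xu et al. 2015

tf.sg_print([y, z])    # prints values, shapes and dtypes
```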

sugartensor.sg_data module

class sugartensor.sg_data.Mnist(batch_size=128, reshape=False, one_hot=False)[source]

Bases: object

Downloads the MNIST dataset and puts it in queues.
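
A minimal sketch of consuming the queued dataset. The train.image / train.label attribute names follow common sugartensor usage but are assumptions here, not guaranteed by this page:

```
import sugartensor as tf

# download MNIST and build input queues
data = tf.sg_data.Mnist(batch_size=32)

# queued batch tensors (attribute names assumed from typical usage)
x = data.train.image   # image batch
y = data.train.label   # integer label batch
```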

sugartensor.sg_initializer module

sugartensor.sg_initializer.constant(name, shape, value=0, dtype=tf.float32, summary=True, regularizer=None, trainable=True)[source]

Creates a tensor variable of which initial values are value and shape is shape.

Args:

name: The name of new variable.
shape: A tuple/list of integers or an integer.
If shape is an integer, it is converted to a list.
value: A Python scalar. All elements of the initialized variable
will be set to this value. Default is 0.
dtype: The data type. Only floating point types are supported. Default is float32.
summary: If True, add this constant to tensor board summary.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
trainable: If True, add this constant to trainable collection. Default is True.

Returns:
A Variable.
sugartensor.sg_initializer.external(name, value, dtype=tf.float32, summary=True, regularizer=None, trainable=True)[source]

Creates a tensor variable of which initial values are value.

For example,

` external("external", [3,3,1,2]) => [3. 3. 1. 2.] `

Args:

name: The name of new variable.
value: A constant value (or list) of output type dtype.
dtype: The type of the elements of the resulting tensor.
summary: If True, add this constant to tensor board summary.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
trainable: If True, add this constant to trainable collection. Default is True.

Returns:
A Variable. Has the same contents as value of dtype.
sugartensor.sg_initializer.glorot_uniform(name, shape, scale=1, dtype=tf.float32, summary=True, regularizer=None, trainable=True)[source]

See [Glorot & Bengio. 2010.](http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf)

Args:

name: The name of new variable.
shape: A tuple/list of integers.
scale: A Python scalar. Scale to initialize. Default is 1.
dtype: The data type. Default is float32.
summary: If True, add this constant to tensor board summary.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
trainable: If True, add this constant to trainable collection. Default is True.

Returns:
A Variable.
sugartensor.sg_initializer.he_uniform(name, shape, scale=1, dtype=tf.float32, summary=True, regularizer=None, trainable=True)[source]

See [He et al. 2015](http://arxiv.org/pdf/1502.01852v1.pdf)

Args:

name: The name of new variable.
shape: A tuple/list of integers.
scale: A Python scalar. Scale to initialize. Default is 1.
dtype: The data type. Default is float32.
summary: If True, add this constant to tensor board summary.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
trainable: If True, add this constant to trainable collection. Default is True.

Returns:
A Variable.
sugartensor.sg_initializer.identity(name, dim, scale=1, dtype=tf.float32, summary=True, regularizer=None, trainable=True)[source]

Creates a tensor variable of which initial values are of an identity matrix.

For example,

```
identity("identity", 3, 2)
=> [[2. 0. 0.]
    [0. 2. 0.]
    [0. 0. 2.]]
```

Args:

name: The name of new variable.
dim: An int. The size of the first and second dimension of the output tensor.
scale: A Python scalar. The value on the diagonal.
dtype: The type of the elements of the resulting tensor.
summary: If True, add this constant to tensor board summary.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
trainable: If True, add this constant to trainable collection. Default is True.

Returns:
A 2-D Variable.
sugartensor.sg_initializer.orthogonal(name, shape, scale=1.1, dtype=tf.float32, summary=True, regularizer=None, trainable=True)[source]

Creates a tensor variable of which initial values are of an orthogonal ndarray.

See [Saxe et al. 2014.](http://arxiv.org/pdf/1312.6120.pdf)

Args:

name: The name of new variable.
shape: A tuple/list of integers.
scale: A Python scalar.
dtype: Either float32 or float64.
summary: If True, add this constant to tensor board summary.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
trainable: If True, add this constant to trainable collection. Default is True.

Returns:
A Variable.
sugartensor.sg_initializer.uniform(name, shape, scale=0.05, dtype=tf.float32, summary=True, regularizer=None, trainable=True)[source]

Creates a tensor variable of which initial values are random numbers based on uniform distribution.

Note that the default value of scale (=0.05) is different from the min/max values (=0.0, 1.0) of tf.random_uniform_initializer.

Args:

name: The name of the new variable.
shape: A tuple/list of integers or an integer.
If shape is an integer, it is converted to a list.
scale: A Python scalar. All initial values should be in range [-scale, scale). Default is .05.
dtype: The data type. Only floating point types are supported. Default is float32.
summary: If True, add this constant to tensor board summary.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
trainable: If True, add this constant to trainable collection. Default is True.

Returns:
A Variable.
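
The initializers above share a common (name, shape, ...) signature and return a Variable created in the current variable scope; layers normally call them internally. A minimal sketch of direct use, with the names and shapes chosen purely for illustration:

```
import sugartensor as tf

# 3x3 convolution kernel, 1 input channel, 16 output channels, He-uniform init
w = tf.sg_initializer.he_uniform('conv_W', (3, 3, 1, 16))

# per-channel bias initialized to zero
b = tf.sg_initializer.constant('conv_b', 16)
```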

sugartensor.sg_layer module

sugartensor.sg_layer.sg_aconv(tensor, opt)[source]

Applies a 2-D atrous (or dilated) convolution.

Args:

tensor: A 4-D Tensor (automatically passed by decorator). opt:

size: A tuple/list of positive integers of length 2 representing [kernel height, kernel width].
Can be an integer if both values are the same. If not specified, (3, 3) is set automatically.
rate: A positive integer. The stride with which we sample input values across
the height and width dimensions. Default is 2.

in_dim: A positive integer. The size of input dimension.
dim: A positive integer. The size of output dimension.
pad: Either SAME (Default) or VALID.
bias: Boolean. If True, biases are added.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
summary: If True, summaries are added. The default is True.

Returns:
A Tensor with the same type as tensor.
sugartensor.sg_layer.sg_aconv1d(tensor, opt)[source]

Applies 1-D atrous (or dilated) convolution.

Args:

tensor: A 3-D Tensor (automatically passed by decorator). opt:

causal: Boolean. If True, zeros are padded before the time axis such that
each activation unit doesn’t have receptive neurons beyond the equivalent time step.
size: A positive integer representing [kernel width]. As a default it is set to 2
if causal is True, 3 otherwise.
rate: A positive integer. The stride with which we sample input values across
the time dimension. Default is 1.

in_dim: A positive integer. The size of input dimension.
dim: A positive integer. The size of output dimension.
pad: Either SAME (Default) or VALID.
bias: Boolean. If True, biases are added.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
summary: If True, summaries are added. The default is True.

Returns:
A Tensor with the same type as tensor.
sugartensor.sg_layer.sg_bypass(tensor, opt)[source]

Returns the input tensor itself.

Args:

tensor: A Tensor (automatically passed by decorator). opt:

bn: Boolean. If True, batch normalization is applied.
ln: Boolean. If True, layer normalization is applied.
dout: A float of range [0, 100). A dropout rate. Default is 0.
act: A name of activation function. e.g., sigmoid, tanh, etc.
Returns:
The same tensor as tensor.
sugartensor.sg_layer.sg_conv(tensor, opt)[source]

Applies a 2-D convolution.

Args:

tensor: A 4-D Tensor (automatically passed by decorator). opt:

size: A tuple/list of positive integers of length 2 representing [kernel height, kernel width].
Can be an integer if both values are the same. If not specified, (3, 3) is set implicitly.
stride: A tuple/list of positive integers of length 2 or 4 representing stride dimensions.
If the length is 2, i.e., (a, b), the stride is [1, a, b, 1]. If the length is 4, i.e., (a, b, c, d), the stride is [a, b, c, d]. Can be an integer. If the length is a, the stride is [1, a, a, 1]. Default value is [1, 1, 1, 1].

in_dim: A positive integer. The size of input dimension.
dim: A positive integer. The size of output dimension.
pad: Either SAME (Default) or VALID.
bias: Boolean. If True, biases are added.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
summary: If True, summaries are added. The default is True.

Returns:
A Tensor with the same type as tensor.
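
Layer functions such as sg_conv are designed to be chained: each call creates its variables and applies the op, returning a tensor that the next call consumes. A minimal sketch, with dimensions chosen purely for illustration (sg_pool is documented in the sg_transform module below):

```
import sugartensor as tf

# any [batch, height, width, channel] tensor works as input
x = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))

# two chained 3x3 convolutions with batch normalization and ReLU,
# followed by 2x2 max pooling
h = (x.sg_conv(dim=16, size=3, act='relu', bn=True)
      .sg_conv(dim=32, size=3, act='relu', bn=True)
      .sg_pool(size=2, stride=2))
```
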
sugartensor.sg_layer.sg_conv1d(tensor, opt)[source]

Applies a 1-D convolution.

Args:

tensor: A 3-D Tensor (automatically passed by decorator). opt:

size: A positive integer representing [kernel width].
If not specified, 2 is set implicitly.
stride: A positive integer. The number of entries by which
the filter is moved right at each step.

in_dim: A positive integer. The size of input dimension.
dim: A positive integer. The size of output dimension.
pad: Either SAME (Default) or VALID.
bias: Boolean. If True, biases are added.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
summary: If True, summaries are added. The default is True.

Returns:
A Tensor with the same type as tensor.
sugartensor.sg_layer.sg_dense(tensor, opt)[source]

Applies a fully connected (dense) layer.

Args:

tensor: A 2-D tensor (automatically passed by decorator). opt:

in_dim: An integer. The size of input dimension.
dim: An integer. The size of output dimension.
bias: Boolean. If True, biases are added.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
summary: If True, summaries are added. The default is True.

Returns:
A Tensor with the same type as tensor.
sugartensor.sg_layer.sg_emb(**kwargs)[source]

Returns a look-up table for embedding.

kwargs:

name: A name for the layer.
emb: A 2-D array (optional).
If None, the resulting tensor should have the shape of
[vocabulary size, embedding dimension size].
Note that its first row is filled with 0's associated with padding.
in_dim: A positive integer. The size of input dimension.
dim: A positive integer. The size of output dimension.
voca_size: A positive integer. The size of vocabulary.
summary: If True, summaries are added. The default is True.

Returns:
A 2-D Tensor of float32.
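
sg_emb is typically paired with sg_lookup (documented in the sg_transform module below): sg_emb builds the table and sg_lookup gathers rows for integer token tensors. A minimal sketch, with vocabulary and embedding sizes as arbitrary example values:

```
import sugartensor as tf

# integer token ids, e.g. a batch of sentences of length 5
tokens = tf.placeholder(tf.int32, shape=(None, 5))

# look-up table: 100-word vocabulary, 20-dimensional embeddings
emb = tf.sg_emb(name='emb', voca_size=100, dim=20)

# gather embeddings -> [batch, 5, 20]
embedded = tokens.sg_lookup(emb=emb)
```
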
sugartensor.sg_layer.sg_espcn(tensor, opt)[source]

Applies a 2-D efficient sub-pixel convolution
(see [Shi et al. 2016](http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Shi_Real-Time_Single_Image_CVPR_2016_paper.pdf)).

Args:

tensor: A 4-D Tensor (automatically passed by decorator). opt:

size: A tuple/list of positive integers of length 2 representing [kernel height, kernel width].
Can be an integer if both values are the same. If not specified, (3, 3) is set implicitly.
stride: A tuple/list of positive integers of length 2 or 4 representing stride dimensions.
If the length is 2, i.e., (a, b), the stride is [1, a, b, 1]. If the length is 4, i.e., (a, b, c, d), the stride is [a, b, c, d]. Can be an integer. If the length is a, the stride is [1, a, a, 1]. Default value is [1, 1, 1, 1].

in_dim: A positive integer. The size of input dimension.
dim: A positive integer. The size of output dimension.
pad: Either SAME (Default) or VALID.
bias: Boolean. If True, biases are added.
factor: Factor to multiply shape by. Default is 2.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
summary: If True, summaries are added. The default is True.

Returns:
A Tensor with the same type as tensor.
sugartensor.sg_layer.sg_gru(tensor, opt)[source]

Applies a GRU.

Args:

tensor: A 3-D Tensor (automatically passed by decorator). opt:

in_dim: A positive integer. The size of input dimension.
dim: A positive integer. The size of output dimension.
bias: Boolean. If True, biases are added.
ln: Boolean. If True, layer normalization is applied.
init_state: A 2-D Tensor. If None, the initial state is set to zeros.
last_only: Boolean. If True, the outputs in the last time step are returned.
mask: Boolean 2-D Tensor or None (default).
For false elements, values are excluded from the calculation.
As a result, the outputs for those locations become 0.
summary: If True, summaries are added. The default is True.

Returns:
A Tensor. If last_only is True, the output tensor has shape [batch size, dim]. Otherwise, [batch size, time steps, dim].
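
The recurrent layers (sg_gru here, sg_lstm and sg_rnn below) all take a [batch size, time steps, in_dim] tensor and share the options above. A minimal sketch that keeps only the final state via last_only; the dimensions are arbitrary example values:

```
import sugartensor as tf

# a [batch, time, feature] sequence tensor, e.g. from an embedding look-up
seq = tf.placeholder(tf.float32, shape=(None, 5, 20))

# GRU over the time axis; keep only the last time step -> [batch, 32]
state = seq.sg_gru(dim=32, last_only=True)
```
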
sugartensor.sg_layer.sg_lstm(tensor, opt)[source]

Applies an LSTM.

Args:

tensor: A 3-D Tensor (automatically passed by decorator). opt:

in_dim: A positive integer. The size of input dimension.
dim: A positive integer. The size of output dimension.
bias: Boolean. If True, biases are added.
ln: Boolean. If True, layer normalization is applied.
init_state: A 2-D Tensor. If None, the initial state is set to zeros.
last_only: Boolean. If True, the outputs in the last time step are returned.
mask: Boolean 2-D Tensor or None (default).
For false elements, values are excluded from the calculation.
As a result, the outputs for those locations become 0.
summary: If True, summaries are added. The default is True.

Returns:
A Tensor. If last_only is True, the output tensor has shape [batch size, dim]. Otherwise, [batch size, time steps, dim].
sugartensor.sg_layer.sg_rnn(tensor, opt)[source]

Applies a simple RNN.

Args:

tensor: A 3-D Tensor (automatically passed by decorator). opt:

in_dim: A positive integer. The size of input dimension.
dim: A positive integer. The size of output dimension.
bias: Boolean. If True, biases are added.
ln: Boolean. If True, layer normalization is applied.
init_state: A 2-D Tensor. If None, the initial state is set to zeros.
last_only: Boolean. If True, the outputs in the last time step are returned.
mask: Boolean 2-D Tensor or None (default).
For false elements, values are excluded from the calculation.
As a result, the outputs for those locations become 0.
summary: If True, summaries are added. The default is True.

Returns:
A Tensor. If last_only is True, the output tensor has shape [batch size, dim]. Otherwise, [batch size, time steps, dim].
sugartensor.sg_layer.sg_upconv(tensor, opt)[source]

Applies an up-convolution (transposed convolution).

Args:

tensor: A 4-D Tensor (automatically passed by decorator). opt:

size: A tuple/list of integers of length 2 representing [kernel height, kernel width].
Can be an integer if both values are the same. If not specified, (4, 4) is set implicitly.
stride: A tuple/list of integers of length 2 or 4 representing stride dimensions.
If the length is 2, i.e., (a, b), the stride is [1, a, b, 1]. If the length is 4, i.e., (a, b, c, d), the stride is [a, b, c, d]. Can be an integer. If the length is a, the stride is [1, a, a, 1]. Default value is [1, 2, 2, 1].

in_dim: A positive integer. The size of input dimension.
dim: A positive integer. The size of output dimension.
pad: Either SAME (Default) or VALID.
bias: Boolean. If True, biases are added.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
summary: If True, summaries are added. The default is True.

Returns:
A Tensor with the same type as tensor.
sugartensor.sg_layer.sg_upconv1d(tensor, opt)[source]

Applies a 1-D up-convolution (transposed convolution).

Args:

tensor: A 3-D Tensor (automatically passed by decorator). opt:

size: A positive integer representing [kernel width]. As a default it is set to 4.
stride: A positive integer representing stride dimension. As a default it is set to 2.
in_dim: A positive integer. The size of input dimension.
dim: A positive integer. The size of output dimension.
pad: Either SAME (Default) or VALID.
bias: Boolean. If True, biases are added.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
summary: If True, summaries are added. The default is True.

Returns:
A Tensor with the same type as tensor.

sugartensor.sg_logging module

sugartensor.sg_logging.sg_debug(msg, *args, **kwargs)[source]
sugartensor.sg_logging.sg_error(msg, *args, **kwargs)[source]
sugartensor.sg_logging.sg_fatal(msg, *args, **kwargs)[source]
sugartensor.sg_logging.sg_info(msg, *args, **kwargs)[source]
sugartensor.sg_logging.sg_summary_activation(tensor, prefix=None, name=None)[source]

Register tensor to summary report as activation

Args:
tensor: A Tensor to log as activation.
prefix: A string. A prefix to display in the tensor board web UI.
name: A string. A name to display in the tensor board web UI.
Returns:
None
sugartensor.sg_logging.sg_summary_audio(tensor, sample_rate=16000, prefix=None, name=None)[source]

Register tensor to summary report as audio

Args:
tensor: A Tensor to log as audio.
sample_rate: An int. Sample rate to report. Default is 16000.
prefix: A string. A prefix to display in the tensor board web UI.
name: A string. A name to display in the tensor board web UI.
Returns:
None
sugartensor.sg_logging.sg_summary_gradient(tensor, gradient, prefix=None, name=None)[source]

Register tensor to summary report as gradient

Args:
tensor: A Tensor to log as gradient.
gradient: A 0-D Tensor. A gradient to log.
prefix: A string. A prefix to display in the tensor board web UI.
name: A string. A name to display in the tensor board web UI.
Returns:
None
sugartensor.sg_logging.sg_summary_image(tensor, prefix=None, name=None)[source]

Register tensor to summary report as image

Args:
tensor: A tensor to log as image.
prefix: A string. A prefix to display in the tensor board web UI.
name: A string. A name to display in the tensor board web UI.
Returns:
None
sugartensor.sg_logging.sg_summary_loss(tensor, prefix='losses', name=None)[source]

Register tensor to summary report as loss

Args:
tensor: A Tensor to log as loss.
prefix: A string. A prefix to display in the tensor board web UI.
name: A string. A name to display in the tensor board web UI.
Returns:
None
sugartensor.sg_logging.sg_summary_metric(tensor, prefix='metrics', name=None)[source]

Register tensor to summary report as metric

Args:
tensor: A Tensor to log as metric.
prefix: A string. A prefix to display in the tensor board web UI.
name: A string. A name to display in the tensor board web UI.
Returns:
None
sugartensor.sg_logging.sg_summary_param(tensor, prefix=None, name=None)[source]

Register tensor to summary report as parameters

Args:
tensor: A Tensor to log as parameters.
prefix: A string. A prefix to display in the tensor board web UI.
name: A string. A name to display in the tensor board web UI.
Returns:
None
sugartensor.sg_logging.sg_verbosity(verbosity=0)[source]
sugartensor.sg_logging.sg_warn(msg, *args, **kwargs)[source]

sugartensor.sg_loss module

sugartensor.sg_loss.sg_bce(tensor, opt)[source]

Returns sigmoid cross entropy loss between tensor and target.

Args:

tensor: A Tensor. Logits. Unscaled log probabilities. opt:

target: A Tensor with the same shape and dtype as tensor. Labels.
name: A string. A name to display in the tensor board web UI.
Returns:
A Tensor of the same shape as tensor

For example,

```
tensor = [[2, -1, 3], [3, 1, -2]]
target = [[0, 1, 1], [1, 1, 0]]
tensor.sg_bce(target=target)
=> [[ 2.12692809  1.31326163  0.04858733]
    [ 0.04858733  0.31326166  0.12692805]]
```

sugartensor.sg_loss.sg_ce(tensor, opt)[source]

Returns softmax cross entropy loss between tensor and target.

Args:

tensor: A Tensor. Logits. Unscaled log probabilities. opt:

target: A Tensor with the same length in the first dimension as the tensor. Labels.
one_hot: Boolean. Whether to treat the labels as one-hot encoding. Default is False.
mask: Boolean. If True, zeros in the target will be excluded from the calculation.
name: A string. A name to display in the tensor board web UI.
Returns:
A 1-D Tensor with the same shape as tensor.

For example,

` tensor = [[[2, -1, 3], [3, 1, -2]]] target = [[2, 1]] tensor.sg_ce(target=target) => [[ 0.32656264  2.13284516]] `

For example,

` tensor = [[2, -1, 3], [3, 1, -2]] target = [[0, 0, 1], [1, 0, 0]] tensor.sg_ce(target=target, one_hot=True) => [ 0.32656264  0.13284527] `

sugartensor.sg_loss.sg_ctc(tensor, opt)[source]

Computes the CTC (Connectionist Temporal Classification) Loss between tensor and target.

Args:

tensor: A 3-D float Tensor. opt:

target: A Tensor with the same length in the first dimension as the tensor. Labels. (Dense tensor)
name: A string. A name to display in the tensor board web UI.
Returns:
A 1-D Tensor with the same length in the first dimension of the tensor.

For example,

` tensor = [[[2., -1., 3.], [3., 1., -2.]], [[1., -1., 2.], [3., 1., -2.]]] target = [[2., 1.], [2., 3.]] tensor.sg_ctc(target=target) => [ 4.45940781  2.43091154] `

sugartensor.sg_loss.sg_hinge(tensor, opt)[source]

Returns hinge loss between tensor and target.

Args:

tensor: A Tensor. opt:

target: A Tensor. Labels.
margin: An int. Maximum margin. Default is 1.
name: A string. A name to display in the tensor board web UI.
Returns:
A Tensor.

For example,

```
tensor = [[30, 10, 40], [13, 30, 42]]
target = [[0, 0, 1], [0, 1, 0]]
tensor.sg_hinge(target=target, one_hot=True)
=> [[ 1.  1.  0.]
    [ 1.  0.  1.]]
```

sugartensor.sg_loss.sg_mae(tensor, opt)[source]

Returns absolute error between tensor and target.

Args:

tensor: A Tensor. opt:

target: A Tensor with the same shape and dtype as tensor.
name: A string. A name to display in the tensor board web UI.
Returns:
A Tensor of the same shape and dtype as tensor

For example,

```
tensor = [[34, 11, 40], [13, 30, 42]]
target = [[34, 10, 41], [14, 31, 40]]
tensor.sg_mae(target=target)
=> [[ 0.  1.  1.]
    [ 1.  1.  2.]]
```

sugartensor.sg_loss.sg_mse(tensor, opt)[source]

Returns squared error between tensor and target.

Args:

tensor: A Tensor. opt:

target: A Tensor with the same shape and dtype as tensor.
name: A string. A name to display in the tensor board web UI.
Returns:
A Tensor of the same shape and dtype as tensor

For example,

```
tensor = [[34, 11, 40], [13, 30, 42]]
target = [[34, 10, 41], [14, 31, 40]]
tensor.sg_mse(target=target)
=> [[ 0.  1.  1.]
    [ 1.  1.  4.]]
```

sugartensor.sg_main module

sugartensor.sg_main.sg_arg()[source]

Gets current command line options

Returns:
tf.sg_opt instance that is updated with current command line options.
sugartensor.sg_main.sg_arg_def(**kwargs)[source]

Defines command line options

Args:
**kwargs:
key: A name for the option.
value: Default value or a tuple of (default value, description).
Returns:
None

For example,

```
# Either of the following two lines will define a --n_epoch command line argument
# and set its default value to 1.
tf.sg_arg_def(n_epoch=1)
tf.sg_arg_def(n_epoch=(1, 'total number of epochs'))
```
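
sg_arg_def and sg_arg work together: options defined with sg_arg_def become fields of the tf.sg_opt returned by sg_arg. A minimal sketch (attribute-style access on the returned tf.sg_opt is assumed):

```
import sugartensor as tf

# define command line options with defaults and descriptions
tf.sg_arg_def(n_epoch=(1, 'total number of epochs'),
              batch_size=(32, 'batch size'))

# parse the current command line into a tf.sg_opt instance
args = tf.sg_arg()
print(args.n_epoch, args.batch_size)
```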

sugartensor.sg_main.sg_context(**kwargs)[source]

Context helper for computational graph building. All elements within the with block share the given parameters.

For example, in the following snippet, the default value of the parameter bn will be set to True in all layers within the with block.

```
with tf.sg_context(bn=True):
    # layers declared in this block default to bn=True
    ...
```

Args:
**kwargs:

in_dim: An integer. The size of input dimension, which is set to the last one by default.
dim: An integer. The size of output dimension. Has the same value as in_dim by default.
bn: Boolean. If True, batch normalization is applied.
ln: Boolean. If True, layer normalization is applied.
dout: A float of range [0, 100). A dropout rate. Default is 0.
bias: Boolean. If True (Default), biases are added.
name: A name for the layer. By default, the function name is assigned.
act: A name of activation function. e.g., sigmoid, tanh, etc.
reuse: True or None; if True, we go into reuse mode for this layer scope
as well as all sub-scopes; if None, we just inherit the parent scope reuse.
Returns:
None
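
A slightly fuller sketch of the context helper: every layer created inside the with block picks up the shared defaults, and it is assumed that an explicitly passed option still overrides the context value. Dimensions are illustrative:

```
import sugartensor as tf

x = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))

# bn=True and act='relu' become the defaults for every layer in this block
with tf.sg_context(bn=True, act='relu'):
    h = (x.sg_conv(dim=16)
          .sg_conv(dim=32)
          .sg_conv(dim=32, act='sigmoid'))  # explicit option assumed to override the context
```
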
sugartensor.sg_main.sg_get_context()[source]

Get current context information

Returns:
tf.sg_opt class object which contains all context information
sugartensor.sg_main.sg_global_step()[source]

Gets global step count

Returns:
A 0-D Tensor.
sugartensor.sg_main.sg_gpus()[source]

Gets the number of currently available GPUs.

Returns:
An integer: the total number of GPUs available.
sugartensor.sg_main.sg_inject(path, mod_name)[source]

Converts all functions in the given Python module to sugar functions so that they can be used in a chainable manner.

Args:
path: A string. Path to the Python module.
mod_name: A string. The name of the Python module to inject.
Returns:
None
sugartensor.sg_main.sg_inject_func(func)[source]

Converts the function func to a sugar function so that it can be used in a chainable manner.

Args:
func: A function to inject.
Returns:
None
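
A sketch of injecting a user-defined function; the sg_triple name and body are hypothetical, and the (tensor, opt) signature plus the sg_sugar_func decorator (documented below) are assumptions modeled on how the built-in sugar functions are described on this page:

```
import sugartensor as tf

# hypothetical custom op following the (tensor, opt) convention of built-in sugar functions
@tf.sg_sugar_func
def sg_triple(tensor, opt):
    return tensor * 3

tf.sg_inject_func(sg_triple)

# after injection the function is chainable like any other sugar function
x = tf.constant([1.0, 2.0])
y = x.sg_triple()   # [3., 6.]
```
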
sugartensor.sg_main.sg_input(shape=None, dtype=tf.float32, name=None)[source]

Creates a placeholder.

Args:
shape: A tuple/list of integers. If an integer is given, it is converted to a list.
dtype: A data type. Default is float32.
name: A name for the placeholder.
Returns:
A wrapped placeholder Tensor.
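
A minimal sketch; whether sg_input prepends a batch dimension to the given shape is not specified on this page:

```
import sugartensor as tf

# 28x28 single-channel image placeholder (float32 by default)
x = tf.sg_input(shape=(28, 28, 1), name='image')
```
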
sugartensor.sg_main.sg_layer_func(func)[source]

Decorates a function func as a sg_layer function.

Args:
func: function to decorate
sugartensor.sg_main.sg_parallel(func)[source]

Decorates a function as a multi-GPU tower function.

Args:
func: A function to decorate.
sugartensor.sg_main.sg_phase()[source]

Gets current training phase

Returns:
A boolean Tensor. If True, it is in the training phase, otherwise inference phase.
sugartensor.sg_main.sg_queue_context(sess=None)[source]

Context helper for queue routines.

Args:
sess: A session to open queues. If not specified, a new session is created.
Returns:
None
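
sg_queue_context pairs with queue-based inputs such as sg_data.Mnist: inside the block the queue runners are running, so queued tensors can be evaluated. A minimal sketch (the data.train.image attribute is an assumption from typical usage):

```
import sugartensor as tf

data = tf.sg_data.Mnist(batch_size=32)     # queued MNIST input
mean_pixel = data.train.image.sg_mean()    # any op over the queued tensor

with tf.Session() as sess:
    tf.sg_init(sess)                       # initialize variables
    with tf.sg_queue_context(sess):        # queue runners live for this block
        print(sess.run(mean_pixel))
```
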
sugartensor.sg_main.sg_reuse(tensor, **opt)[source]

Reconstructs the computational graph of tensor so that all the parameters can be reused, and replaces its input tensor with opt.input.

Args:

tensor: A Tensor (automatically given by chaining). **opt:

input: A Tensor that will replace the original input tensor.
Returns:
Reconstructed tensor nodes.
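
A minimal sketch of graph reuse: build a network once on one input, then rebuild the identical graph on a second input while sharing all parameters. Layer names and dimensions are illustrative:

```
import sugartensor as tf

x1 = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))
x2 = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))

# build a small network once
out1 = x1.sg_conv(dim=16, name='conv1').sg_conv(dim=32, name='conv2')

# rebuild the same graph on a different input, reusing all parameters
out2 = tf.sg_reuse(out1, input=x2)
```
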
sugartensor.sg_main.sg_rnn_layer_func(func)[source]

Decorates a function as an sg_rnn_layer function.

Args:
func: A function to decorate.
sugartensor.sg_main.sg_sugar_func(func)[source]

Decorates a function func so that it can be a sugar function. Sugar function can be used in a chainable manner.

Args:
func: function to decorate
Returns:
A sugar function.

sugartensor.sg_metric module

sugartensor.sg_metric.sg_accuracy(tensor, opt)[source]

Returns accuracy of predictions.

Args:

tensor: A Tensor. Probability distributions or unscaled prediction scores. opt:

target: A Tensor. Labels.
Returns:
A Tensor of the same shape as tensor. Each value will be 1 if correct else 0.

For example,

` tensor = [[20.1, 18, -4.2], [0.04, 21.1, 31.3]] target = [[0, 1]] tensor.sg_accuracy(target=target) => [[ 1.  0.]] `

sugartensor.sg_net module

sugartensor.sg_net.sg_densenet_121(x, opt)[source]

Applies dense net 121 model.

Note that the fc layers in the original architecture
will be replaced with fully convolutional layers. For convenience, we still call them fc layers, though.

Args:

x: A Tensor. opt:

k: An integer. The growth rate of the densenet. Default is 32.
num_class: An integer. Number of classes. Default is 1000.
conv_only: Boolean. If True, fc layers are not applied. Default is False.
squeeze: Boolean. If True (default), the dimensions with size 1 in the final outputs will be removed.
act: String. The activation function name. Default is 'relu'.
reuse: Boolean (optional). If True, all variables will be loaded from the previous network.
name: String (optional). If provided, used as the scope name of this network.
Returns:
A Tensor.
sugartensor.sg_net.sg_densenet_161(x, opt)[source]

Applies dense net 161 model.

Note that the fc layers in the original architecture
will be replaced with fully convolutional layers. For convenience, we still call them fc layers, though.

Args:

x: A Tensor. opt:

k: An integer. The growth rate of the densenet. Default is 48.
num_class: An integer. Number of classes. Default is 1000.
conv_only: Boolean. If True, fc layers are not applied. Default is False.
squeeze: Boolean. If True (default), the dimensions with size 1 in the final outputs will be removed.
act: String. The activation function name. Default is 'relu'.
reuse: Boolean (optional). If True, all variables will be loaded from the previous network.
name: String (optional). If provided, used as the scope name of this network.
Returns:
A Tensor.
sugartensor.sg_net.sg_densenet_169(x, opt)[source]

Applies dense net 169 model.

Note that the fc layers in the original architecture
will be replaced with fully convolutional layers. For convenience, we still call them fc layers, though.

Args:

x: A Tensor. opt:

k: An integer. The growth rate of the densenet. Default is 32.
num_class: An integer. Number of classes. Default is 1000.
conv_only: Boolean. If True, fc layers are not applied. Default is False.
squeeze: Boolean. If True (default), the dimensions with size 1 in the final outputs will be removed.
act: String. The activation function name. Default is 'relu'.
reuse: Boolean (optional). If True, all variables will be loaded from the previous network.
name: String (optional). If provided, used as the scope name of this network.
Returns:
A Tensor.
sugartensor.sg_net.sg_densenet_201(x, opt)[source]

Applies dense net 201 model.

Note that the fc layers in the original architecture
will be replaced with fully convolutional layers. For convenience, we still call them fc layers, though.

Args:

x: A Tensor. opt:

k: An integer. The growth rate of the densenet. Default is 32.
num_class: An integer. Number of classes. Default is 1000.
conv_only: Boolean. If True, fc layers are not applied. Default is False.
squeeze: Boolean. If True (default), the dimensions with size 1 in the final outputs will be removed.
act: String. The activation function name. Default is 'relu'.
reuse: Boolean (optional). If True, all variables will be loaded from the previous network.
name: String (optional). If provided, used as the scope name of this network.
Returns:
A Tensor.
sugartensor.sg_net.sg_densenet_layer(x, opt)[source]

Applies basic architecture of densenet layer.

Note that the fc layers in the original architecture
will be replaced with fully convolutional layers. For convenience, we still call them fc layers, though.

Args:

x: A Tensor. opt:

dim: An integer. Dimension for this densenet layer.
num: Number of times to repeat.
act: String. The activation function name. Default is 'relu'.
trans: Boolean. If True (default), a transition layer will be applied.
reuse: Boolean (optional). If True, all variables will be loaded from the previous network.
name: String (optional). Used as convolution layer prefix.
Returns:
A Tensor.
sugartensor.sg_net.sg_resnet_101(x, opt)[source]

Applies residual net 101 model.

Note that the fc layers in the original architecture
will be replaced with fully convolutional layers. For convenience, we still call them fc layers, though.

Args:

x: A Tensor. opt:

num_class: An integer. Number of classes. Default is 1000.
conv_only: Boolean. If True, fc layers are not applied. Default is False.
squeeze: Boolean. If True (default), the dimensions with size 1 in the final outputs will be removed.
act: String. The activation function name. Default is 'relu'.
reuse: Boolean (optional). If True, all variables will be loaded from the previous network.
name: String (optional). If provided, used as the scope name of this network.
Returns:
A Tensor.
sugartensor.sg_net.sg_resnet_152(x, opt)[source]

Applies residual net 152 model.

Note that the fc layers in the original architecture
will be replaced with fully convolutional layers. For convenience, we still call them fc layers, though.

Args:

x: A Tensor. opt:

num_class: An integer. Number of classes. Default is 1000.
conv_only: Boolean. If True, fc layers are not applied. Default is False.
squeeze: Boolean. If True (default), the dimensions with size 1 in the final outputs will be removed.
act: String. The activation function name. Default is 'relu'.
reuse: Boolean (optional). If True, all variables will be loaded from the previous network.
name: String (optional). If provided, used as the scope name of this network.
Returns:
A Tensor.
sugartensor.sg_net.sg_resnet_200(x, opt)[source]

Applies residual net 200 model.

Note that the fc layers in the original architecture
will be replaced with fully convolutional layers. For convenience, we still call them fc layers, though.

Args:

x: A Tensor. opt:

num_class: An integer. Number of classes. Default is 1000.
conv_only: Boolean. If True, fc layers are not applied. Default is False.
squeeze: Boolean. If True (default), the dimensions with size 1 in the final outputs will be removed.
act: String. The activation function name. Default is 'relu'.
reuse: Boolean (optional). If True, all variables will be loaded from the previous network.
name: String (optional). If provided, used as the scope name of this network.
Returns:
A Tensor.
sugartensor.sg_net.sg_resnet_50(x, opt)[source]

Applies residual net 50 model.

Note that the fc layers in the original architecture
will be replaced with fully convolutional layers. For convenience, we still call them fc layers, though.

Args:

x: A Tensor. opt:

num_class: An integer. Number of classes. Default is 1000.
conv_only: Boolean. If True, fc layers are not applied. Default is False.
squeeze: Boolean. If True (default), the dimensions with size 1 in the final outputs will be removed.
act: String. The activation function name. Default is 'relu'.
reuse: Boolean (optional). If True, all variables will be loaded from the previous network.
name: String (optional). If provided, used as the scope name of this network.
Returns:
A Tensor.
sugartensor.sg_net.sg_resnet_layer(x, opt)[source]

Applies basic architecture of residual net.

Note that the fc layers in the original architecture
will be replaced with fully convolutional layers. For convenience, we still call them fc layers, though.

Args:

x: A Tensor. opt:

dim: An integer. Dimension for this resnet layer.
num: Number of times to repeat.
act: String. The activation function name. Default is 'relu'.
reuse: Boolean (optional). If True, all variables will be loaded from the previous network.
name: String (optional). Used as convolution layer prefix.
Returns:
A Tensor.
sugartensor.sg_net.sg_vgg_16(tensor, opt)[source]

Applies vgg 16 model.

Note that the fc layers in the original architecture
will be replaced with fully convolutional layers. For convenience, we still call them fc layers, though.

Args:

tensor: A Tensor. opt:

num_class: An integer. Number of classes. Default is 1000.
conv_only: Boolean. If True, fc layers are not applied. Default is False.
squeeze: Boolean. If True (default), the dimensions with size 1 in the final outputs will be removed.
act: String. The activation function name. Default is 'relu'.
bn: Boolean. Default is False. If True, batch normalization will be applied.
reuse: Boolean (optional). If True, all variables will be loaded from the previous network.
name: String (optional). If provided, used as the scope name of this network.
Returns:
A Tensor.
sugartensor.sg_net.sg_vgg_19(tensor, opt)[source]

Applies vgg 19 model.

Note that the fc layers in the original architecture
will be replaced with fully convolutional layers. For convenience, we still call them fc layers, though.

Args:

tensor: A Tensor. opt:

num_class: An integer. Number of classes. Default is 1000.
conv_only: Boolean. If True, fc layers are not applied. Default is False.
squeeze: Boolean. If True (default), the dimensions with size 1 in the final outputs will be removed.
act: String. The activation function name. Default is 'relu'.
bn: Boolean. Default is False. If True, batch normalization will be applied.
reuse: Boolean (optional). If True, all variables will be loaded from the previous network.
name: String (optional). If provided, used as the scope name of this network.
Returns:
A Tensor.

sugartensor.sg_optimize module

class sugartensor.sg_optimize.AdaMaxOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.999, use_locking=False, name='Adamax')[source]

Bases: tensorflow.python.training.optimizer.Optimizer

Optimizer that implements the Adamax algorithm. See [Kingma et al. 2014](http://arxiv.org/abs/1412.6980) ([pdf](http://arxiv.org/pdf/1412.6980.pdf)).

excerpted from https://github.com/openai/iaf/blob/master/tf_utils/adamax.py


class sugartensor.sg_optimize.MaxPropOptimizer(learning_rate=0.001, beta2=0.999, use_locking=False, name='MaxProp')[source]

Bases: tensorflow.python.training.optimizer.Optimizer

Optimizer that implements the MaxProp algorithm by namju.kim@kakaocorp.com.

sugartensor.sg_queue module

sugartensor.sg_queue.sg_producer_func(func)[source]

Decorates a function func as sg_producer_func.

Args:
func: A function to decorate.

sugartensor.sg_train module

sugartensor.sg_train.sg_init(sess)[source]

Initializes session variables.

Args:
sess: Session to initialize.
sugartensor.sg_train.sg_optim(loss, **kwargs)[source]

Applies gradients to variables.

Args:

loss: A 0-D Tensor containing the value to minimize, or a list of 0-D Tensors for multiple GPUs.

kwargs:

optim: A name for optimizer. 'MaxProp' (default), 'AdaMax', 'Adam', 'RMSProp' or 'sgd'.
lr: A Python Scalar (optional). Learning rate. Default is .001.
beta1: A Python Scalar (optional). Default is .9.
beta2: A Python Scalar (optional). Default is .99.
momentum: A Python Scalar for the RMSProp optimizer (optional). Default is 0.
category: A string or string list. Specifies the variables that should be trained (optional).
Only if the name of a trainable variable starts with category, its value is updated.
Default is '', which means all trainable variables are updated.
sugartensor.sg_train.sg_print(tensor_list)[source]

Simple tensor printing function for debugging. Prints the value, shape, and data type of each tensor in the list.

Args:
tensor_list: A list/tuple of tensors or a single tensor.
Returns:
The value of the tensors.

For example,

`python import sugartensor as tf a = tf.constant([1.]) b = tf.constant([2.]) out = tf.sg_print([a, b]) # Should print [ 1.] (1,) float32 #              [ 2.] (1,) float32 print(out) # Should print [array([ 1.], dtype=float32), array([ 2.], dtype=float32)] `

sugartensor.sg_train.sg_regularizer_loss(scale=1.0)[source]

Gets the regularizer loss.

Args:
scale: A scalar. A weight applied to regularizer loss
sugartensor.sg_train.sg_restore(sess, save_path, category='')[source]

Restores previously saved variables.

Args:
sess: A Session to use to restore the parameters.
save_path: Path where parameters were previously saved.
category: A String to filter variables that start with the given category.

Returns:

sugartensor.sg_train.sg_train(**kwargs)[source]

Trains the model.

Args:
**kwargs:

optim: A name for optimizer. 'MaxProp' (default), 'AdaMax', 'Adam', 'RMSProp' or 'sgd'.
loss: A 0-D Tensor containing the value to minimize.
lr: A Python Scalar (optional). Learning rate. Default is .001.
beta1: A Python Scalar (optional). Default is .9.
beta2: A Python Scalar (optional). Default is .99.
save_dir: A string. The root path to which checkpoint and log files are saved.
Default is asset/train.
max_ep: A positive integer. Maximum number of epochs. Default is 1000.
ep_size: A positive integer. Number of total batches in an epoch.
For proper display of log. Default is 1e5.
save_interval: A Python scalar. The interval of saving checkpoint files.
By default, for every 600 seconds, a checkpoint file is written.
log_interval: A Python scalar. The interval of recording logs.
By default, for every 60 seconds, logging is executed.
max_keep: A positive integer. Maximum number of recent checkpoints to keep. Default is 5.
keep_interval: A Python scalar. How often to keep checkpoints. Default is 1 hour.
category: Scope name or list to train.
eval_metric: A list of tensors containing the value to evaluate. Default is [].
tqdm: Boolean. If True (default), progress bars are shown. If False, a series of loss values
will be shown on the console.
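
Putting the pieces together, a minimal end-to-end training sketch in the style this API suggests: queued MNIST input, a small chained network, a softmax cross-entropy loss, and sg_train handling the session, checkpoints and logging. The train.image, train.label and train.num_batch attributes follow common sugartensor usage and are assumptions here:

```
import sugartensor as tf

# queued MNIST input (attribute names assumed from typical usage)
data = tf.sg_data.Mnist(batch_size=32)
x, y = data.train.image, data.train.label

# small convolutional network -> flattened features
with tf.sg_context(act='relu', bn=True):
    h = (x.sg_conv(dim=16)
          .sg_pool(size=2, stride=2)
          .sg_conv(dim=32)
          .sg_pool(size=2, stride=2)
          .sg_flatten())

# final linear layer producing class logits
logit = h.sg_dense(dim=10)

# softmax cross-entropy against integer labels
loss = logit.sg_ce(target=y)

# train for 5 epochs; checkpoints and logs go under asset/train by default
# (ep_size: batches per epoch, num_batch attribute assumed)
tf.sg_train(loss=loss, ep_size=data.train.num_batch, max_ep=5)
```
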
sugartensor.sg_train.sg_train_func(func)[source]

Decorates a function func as sg_train_func.

Args:
func: A function to decorate

sugartensor.sg_transform module

sugartensor.sg_transform.sg_all(tensor, opt)[source]

Computes the “logical and” of elements across axis of a tensor.

See tf.reduce_all() in tensorflow.

Args:

tensor: A Tensor (automatically given by chain). opt:

axis: A tuple/list of integers or an integer. The axis to reduce.
keep_dims: If true, retains reduced dimensions with length 1.
name: If provided, replace current tensor's name.
Returns:
A Tensor.
sugartensor.sg_transform.sg_any(tensor, opt)[source]

Computes the “logical or” of elements across axis of a tensor.

See tf.reduce_any() in tensorflow.

Args:

tensor: A Tensor (automatically given by chain). opt:

axis: A tuple/list of integers or an integer. The axis to reduce.
keep_dims: If true, retains reduced dimensions with length 1.
name: If provided, replace current tensor's name.
Returns:
A Tensor.
sugartensor.sg_transform.sg_argmax(tensor, opt)[source]

Returns the indices of the maximum values along the specified axis.

See tf.argmax() in tensorflow.

Args:

tensor: A Tensor (automatically given by chain). opt:

axis: Target axis. Default is the last one.
name: If provided, replace current tensor's name.
Returns:
A Tensor.
sugartensor.sg_transform.sg_argmin(tensor, opt)[source]

Returns the indices of the minimum values along the specified axis.

See tf.argmin() in tensorflow.

Args:

tensor: A Tensor (automatically given by chain). opt:

axis: Target axis. Default is the last one.
name: If provided, replace current tensor's name.
Returns:
A Tensor.
sugartensor.sg_transform.sg_cast(tensor, opt)[source]

Casts a tensor to a new type.

See tf.cast() in tensorflow.

Args:

tensor: A Tensor or SparseTensor (automatically given by chain). opt:

dtype: The destination type.
name: If provided, it replaces current tensor's name.
Returns:
A Tensor or SparseTensor with same shape as tensor.
sugartensor.sg_transform.sg_concat(tensor, opt)[source]

Concatenates tensors along an axis.

See tf.concat() in tensorflow.

Args:

tensor: A Tensor (automatically given by chain). opt:

target: A Tensor. Must have the same rank as tensor, and
all dimensions except opt.dim must be equal.
axis: Target axis. Default is the last one.
name: If provided, replace current tensor's name.

Returns:
A Tensor.
sugartensor.sg_transform.sg_exp(tensor, opt)[source]

Computes the exponential of a dense tensor.

See tf.exp() in tensorflow.

Args:

tensor: A Tensor ( automatically given by chain ) opt:

name: If provided, replace current tensor’s name.
Returns:
A Tensor.
sugartensor.sg_transform.sg_expand_dims(tensor, opt)[source]

Inserts a new axis.

See tf.expand_dims() in tensorflow.

Args:

tensor: A Tensor (automatically given by chain). opt:

axis: Dimension to expand. Default is -1.
name: If provided, it replaces current tensor's name.
Returns:
A Tensor.
sugartensor.sg_transform.sg_flatten(tensor, opt)[source]

Reshapes a tensor to batch_size x -1.

See tf.reshape() in tensorflow.

Args:

tensor: A Tensor (automatically given by chain). opt:

name: If provided, it replaces current tensor’s name.
Returns:
A 2-D tensor.
sugartensor.sg_transform.sg_float(tensor, opt)[source]

Casts a tensor to floatx.

See tf.cast() in tensorflow.

Args:

tensor: A Tensor or SparseTensor (automatically given by chain). opt:

name : If provided, it replaces current tensor’s name
Returns:
A Tensor or SparseTensor with same shape as tensor.
sugartensor.sg_transform.sg_identity(tensor, opt)[source]

Returns the same tensor

Args:

tensor: A Tensor (automatically given by chain). opt:

name : If provided, it replaces current tensor’s name
Returns:
A Tensor. Has the same content as tensor.
sugartensor.sg_transform.sg_int(tensor, opt)[source]

Casts a tensor to intx.

See tf.cast() in tensorflow.

Args:

tensor: A Tensor or SparseTensor (automatically given by chain). opt:

name: If provided, it replaces current tensor’s name.
Returns:
A Tensor or SparseTensor with same shape as tensor.
sugartensor.sg_transform.sg_inverse_periodic_shuffle(tensor, opt)[source]

Inverse periodic shuffle transformation for SubPixel CNN
(see [Shi et al. 2016](http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Shi_Real-Time_Single_Image_CVPR_2016_paper.pdf)).

Args:

tensor: A tensor (automatically given by chain). opt:

factor: Factor to multiply shape by. Default is 2.
name: If provided, it replaces current tensor's name.
Returns:
A tensor
sugartensor.sg_transform.sg_log(tensor, opt)[source]

Computes the natural logarithm of a dense tensor.

See tf.log() in tensorflow.

Args:

tensor: A Tensor ( automatically given by chain ) opt:

name: If provided, replace current tensor’s name.
Returns:
A Tensor.
sugartensor.sg_transform.sg_lookup(tensor, opt)[source]

Looks up embeddings for the tensor in the given embedding matrix.

Args:

tensor: A tensor ( automatically given by chain ) opt:

emb: A 2-D Tensor. An embedding matrix.
name: If provided, replace current tensor's name.
Returns:
A Tensor.
sugartensor.sg_transform.sg_max(tensor, opt)[source]

Computes the maximum of elements across axis of a tensor.

See tf.reduce_max() in tensorflow.

Args:

tensor: A Tensor (automatically given by chain). opt:

axis: A tuple/list of integers or an integer. The axis to reduce.
keep_dims: If true, retains reduced dimensions with length 1.
name: If provided, replace current tensor's name.
Returns:
A Tensor.
sugartensor.sg_transform.sg_mean(tensor, opt)[source]

Computes the mean of elements across axis of a tensor.

See tf.reduce_mean() in tensorflow.

Args:

tensor: A Tensor (automatically given by chain). opt:

axis: A tuple/list of integers or an integer. The axis to reduce.
keep_dims: If true, retains reduced dimensions with length 1.
name: If provided, replace current tensor's name.
Returns:
A Tensor.
sugartensor.sg_transform.sg_min(tensor, opt)[source]

Computes the minimum of elements across axis of a tensor.

See tf.reduce_min() in tensorflow.

Args:

tensor: A Tensor (automatically given by chain). opt:

axis: A tuple/list of integers or an integer. The axis to reduce.
keep_dims: If true, retains reduced dimensions with length 1.
name: If provided, replace current tensor's name.
Returns:
A Tensor.
sugartensor.sg_transform.sg_one_hot(tensor, opt)[source]

Converts a tensor into a one-hot tensor.

See tf.one_hot() in tensorflow.

Args:

tensor: A Tensor ( automatically given by chain ) opt:

depth: The number of classes.
name: If provided, replace current tensor's name.
Returns:
A Tensor.
sugartensor.sg_transform.sg_periodic_shuffle(tensor, opt)[source]

Periodic shuffle transformation for SubPixel CNN
(see [Shi et al. 2016](http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Shi_Real-Time_Single_Image_CVPR_2016_paper.pdf)).

Args:

tensor: A tensor (automatically given by chain). opt:

factor: Factor to multiply shape by. Default is 2.
name: If provided, it replaces current tensor's name.
Returns:
A tensor
sugartensor.sg_transform.sg_pool(tensor, opt)[source]

Performs the 2-D pooling on the tensor. Mostly used with sg_conv().

Args:

tensor: A 4-D Tensor (automatically given by chain). opt:

size: A tuple or list of integers of length 2 representing [kernel height, kernel width].
Can be an int if both values are the same. If not specified, (2, 2) is set implicitly.
stride: A tuple or list of integers of length 2 or 4 representing stride dimensions.
If the length is 2, i.e., (a, b), the stride is [1, a, b, 1]. If the length is 4, i.e., (a, b, c, d), the stride is [a, b, c, d]. Can be an int. If the length is an int, i.e., a, the stride is [1, a, a, 1]. The default value is [1, 1, 1, 1].

avg: Boolean. If True, average pooling is applied. Otherwise, max pooling.
name: If provided, replace current tensor's name.

Returns:
A Tensor. The max pooled output tensor.
sugartensor.sg_transform.sg_pool1d(tensor, opt)[source]

Performs the 1-D pooling on the tensor.

Args:

tensor: A 3-D Tensor (automatically passed by decorator). opt:

size: A positive integer representing [kernel width].
Default is 2.
stride: A positive integer. The number of entries by which
the filter is moved right at each step. Default is 2.

avg: Boolean. If True, average pooling is applied. Otherwise, max pooling.
name: If provided, replace current tensor's name.

Returns:
A tensor
sugartensor.sg_transform.sg_prod(tensor, opt)[source]

Computes the product of elements across axis of a tensor.

See tf.reduce_prod() in tensorflow.

Args:

tensor: A Tensor (automatically given by chain). opt:

axis: A tuple/list of integers or an integer. The axis to reduce.
keep_dims: If true, retains reduced dimensions with length 1.
name: If provided, replace current tensor's name.
Returns:
A Tensor.
sugartensor.sg_transform.sg_reshape(tensor, opt)[source]

Reshapes a tensor.

See tf.reshape() in tensorflow.

Args:

tensor: A Tensor (automatically given by chain). opt:

shape: A tuple/list of integers. The destination shape.
name: If provided, replace current tensor's name.
Returns:
A Tensor.
sugartensor.sg_transform.sg_reverse_seq(tensor, opt)[source]

Reverses variable length slices.

Before applying the pure tensorflow function tf.reverse_sequence,
this function calculates sequence lengths by counting non-zeros.

For example,

```
tensor = [[1, 2, 3, 0, 0], [4, 5, 0, 0, 0]]
tensor.sg_reverse_seq()
=> [[3 2 1 0 0]
    [5 4 0 0 0]]
```

Args:

tensor: A 2-D Tensor (automatically given by chain). opt:

axis: Axis to reverse. Default is 1.
name: If provided, it replaces current tensor's name.
Returns:
A Tensor with the same shape and type as tensor.
sugartensor.sg_transform.sg_squeeze(tensor, opt)[source]

Removes axis of size 1 from the shape of a tensor.

See tf.squeeze() in tensorflow.

Args:

tensor: A Tensor (automatically given by chain). opt:

axis: A tuple/list of integers or an integer. Axis to remove. Default is -1.
name: If provided, it replaces current tensor's name.

Returns:
A Tensor.
sugartensor.sg_transform.sg_sum(tensor, opt)[source]

Computes the sum of elements across axis of a tensor.

See tf.reduce_sum() in tensorflow.

Args:

tensor: A Tensor with zero-padding (automatically given by chain). opt:

axis: A tuple/list of integers or an integer. The axis to reduce.
keep_dims: If true, retains reduced dimensions with length 1.
name: If provided, replace current tensor's name.
Returns:
A Tensor.
sugartensor.sg_transform.sg_to_sparse(tensor, opt)[source]

Converts a dense tensor into a sparse tensor.

See tf.SparseTensor() in tensorflow.

Args:

tensor: A Tensor with zero-padding (automatically given by chain). opt:

name: If provided, replace current tensor’s name.
Returns:
A SparseTensor.
sugartensor.sg_transform.sg_transpose(tensor, opt)[source]

Permutes the dimensions according to opt.perm.

See tf.transpose() in tensorflow.

Args:

tensor: A Tensor (automatically given by chain). opt:

perm: A permutation of the dimensions of tensor. The target shape.
name: If provided, replace current tensor's name.
Returns:
A Tensor.

sugartensor.sg_util module

sugartensor.sg_util.sg_opt

alias of _Opt

Module contents