smooth.glm

Module: smooth.glm

Inheritance diagram for regreg.smooth.glm:

problems.composite.composite → problems.composite.smooth → regreg.smooth.smooth_atom

regreg.smooth.smooth_atom is the base class of gaussian_loglike, glm, huber_loss, logistic_loglike, multinomial_loglike and poisson_loglike; coxph derives from glm.

Classes

coxph

class regreg.smooth.glm.coxph(X, times, status, coef=1.0, offset=None, quadratic=None, initial=None)

Bases: regreg.smooth.glm.glm

__init__(X, times, status, coef=1.0, offset=None, quadratic=None, initial=None)

Cox proportional hazards loss function.

Parameters

X : np.float(n,p)

Design matrix.

times : np.float(n)

Event times.

status : np.bool(n)

Censoring status.
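To make the loss concrete, here is a minimal NumPy sketch of a Breslow-style negative log partial likelihood evaluated at a linear predictor; the helper name and the lack of tie handling are illustrative assumptions, not regreg's implementation.

```python
import numpy as np

def cox_neg_log_partial_likelihood(eta, times, status):
    # For each observed event i (status[i] True), subtract eta_i and add
    # log of sum of exp(eta_j) over the risk set {j : times_j >= times_i}.
    loss = 0.0
    for i in range(len(times)):
        if status[i]:
            at_risk = times >= times[i]
            loss += np.log(np.exp(eta[at_risk]).sum()) - eta[i]
    return loss

# With eta = 0 and distinct event times, each event contributes log(|risk set|).
times = np.array([1.0, 2.0, 3.0])
status = np.array([True, True, True])
print(cox_neg_log_partial_likelihood(np.zeros(3), times, status))  # log 3 + log 2 + log 1
```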

classmethod affine(linear_operator, offset, coef=1, diag=False, quadratic=None, **kws)

Keywords given in kws are passed to cls constructor along with other arguments

apply_offset(x)

If self.offset is not None, return x-self.offset, else return x.

property conjugate
property data
classmethod gaussian(X, response, coef=1.0, offset=None, quadratic=None, initial=None)

Create a loss for a Gaussian regression model.

Parameters

X : [ndarray, regreg.affine.affine_transform]

Design matrix

Y : ndarray

Response vector.

offset : ndarray (optional)

Offset to be applied in parameter space before evaluating loss.

quadratic : regreg.identity_quadratic.identity_quadratic (optional)

Optional quadratic to be added to objective.

initial : ndarray

Initial guess at coefficients.

Returns

glm_obj : regreg.glm.glm

General linear model loss.
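The loss this factory builds evaluates, up to the coef scaling, \(\ell(X\beta)=\tfrac{1}{2}\|Y-X\beta\|^2_2\) with gradient \(X^T(X\beta-Y)\). A self-contained NumPy sketch (not regreg code), with a finite-difference check of the gradient:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 40, 3
X = rng.standard_normal((n, p))
Y = rng.standard_normal(n)
beta = rng.standard_normal(p)

resid = X @ beta - Y
value = 0.5 * resid @ resid   # ell(X beta) = ||Y - X beta||^2 / 2
grad = X.T @ resid            # gradient: X^T (X beta - Y)

# finite-difference sanity check on the first coordinate
eps = 1e-6
beta2 = beta.copy()
beta2[0] += eps
value2 = 0.5 * np.sum((X @ beta2 - Y) ** 2)
assert abs((value2 - value) / eps - grad[0]) < 1e-3
```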

get_conjugate()
get_data()
get_lipschitz()
get_offset()
get_quadratic()

Get the quadratic part of the composite.

gradient(beta)

Compute the gradient of the loss \(\nabla \ell(X\beta)\).

Parameters

beta : ndarray

Parameters.

Returns

grad : ndarray

Gradient of the loss at \(\beta\).

hessian(beta)

Compute the Hessian of the loss \(\nabla^2 \ell(X\beta)\).

Parameters

beta : ndarray

Parameters.

Returns

hess : ndarray

Hessian of the loss at \(\beta\), if defined.

classmethod huber(X, response, smoothing_parameter, coef=1.0, offset=None, quadratic=None, initial=None)

Create a loss for a regression model using Huber loss.

Parameters

X : [ndarray, regreg.affine.affine_transform]

Design matrix

response : ndarray

Response vector.

smoothing_parameter : float

Smoothing parameter for Huber loss.

offset : ndarray (optional)

Offset to be applied in parameter space before evaluating loss.

quadratic : regreg.identity_quadratic.identity_quadratic (optional)

Optional quadratic to be added to objective.

initial : ndarray

Initial guess at coefficients.

Returns

glm_obj : regreg.glm.glm

General linear model loss.
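For intuition, the Huber loss is quadratic near zero and linear in the tails; in one common parameterization (shown for illustration, since regreg's internal scaling via the smoothing parameter may differ):

```python
import numpy as np

def huber(r, delta):
    # Elementwise Huber loss: r^2/2 for |r| <= delta,
    # delta*|r| - delta^2/2 beyond, so value and slope match at |r| = delta.
    r = np.asarray(r, dtype=float)
    small = np.abs(r) <= delta
    return np.where(small, 0.5 * r ** 2, delta * np.abs(r) - 0.5 * delta ** 2)

print(huber([0.5, 3.0], 1.0))  # values 0.125 and 2.5
```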

latexify(var=None, idx='')
classmethod linear(linear_operator, coef=1, diag=False, offset=None, quadratic=None, **kws)

Keywords given in kws are passed to cls constructor along with other arguments

linear_predictor(beta)

Compute \(X\beta\).

Parameters

beta : ndarray

Parameters.

Returns

linpred : ndarray

property lipschitz
classmethod logistic(X, successes, trials=None, coef=1.0, offset=None, quadratic=None, initial=None)

Create a loss for a logistic regression model.

Parameters

X : [ndarray, regreg.affine.affine_transform]

Design matrix

successes : ndarray

Responses (should be non-negative integers).

trials : ndarray (optional)

Number of trials for each success. If None, defaults to np.ones_like(successes).

offset : ndarray (optional)

Offset to be applied in parameter space before evaluating loss.

quadratic : regreg.identity_quadratic.identity_quadratic (optional)

Optional quadratic to be added to objective.

initial : ndarray

Initial guess at coefficients.

Returns

glm_obj : regreg.glm.glm

General linear model loss.
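Written out, the negative logistic log-likelihood at linear predictor \(\eta = X\beta\) is \(\sum_i [n_i \log(1+e^{\eta_i}) - y_i \eta_i]\). A minimal NumPy sketch of that formula (the helper name is assumed; this is not regreg's implementation):

```python
import numpy as np

def logistic_loss(beta, X, successes, trials=None):
    # sum_i [ n_i * log(1 + exp(eta_i)) - y_i * eta_i ],  eta = X beta.
    # np.logaddexp(0, eta) computes log(1 + e^eta) without overflow.
    if trials is None:
        trials = np.ones_like(successes, dtype=float)  # the documented default
    eta = X @ beta
    return np.sum(trials * np.logaddexp(0.0, eta) - successes * eta)

# At beta = 0 every eta_i = 0, so the loss is sum(trials) * log(2).
X = np.ones((4, 2))
y = np.array([0.0, 1.0, 1.0, 0.0])
print(logistic_loss(np.zeros(2), X, y))  # 4 * log 2
```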

nonsmooth_objective(x, check_feasibility=False)
objective(beta)

Compute the loss \(\ell(X\beta)\).

Parameters

beta : ndarray

Parameters.

Returns

objective : float

Value of the loss at \(\beta\).

objective_template = '\\ell^{\\text{Cox}}\\left(%(var)s\\right)'
objective_vars = {'coef': 'C', 'offset': '\\alpha+', 'shape': 'p', 'var': '\\beta'}
property offset
classmethod poisson(X, counts, coef=1.0, offset=None, quadratic=None, initial=None)

Create a loss for a Poisson regression model.

Parameters

X : [ndarray, regreg.affine.affine_transform]

Design matrix

counts : ndarray

Response vector. Should be non-negative integers.

offset : ndarray (optional)

Offset to be applied in parameter space before evaluating loss.

quadratic : regreg.identity_quadratic.identity_quadratic (optional)

Optional quadratic to be added to objective.

initial : ndarray

Initial guess at coefficients.

Returns

glm_obj : regreg.glm.glm

General linear model loss.
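Up to a constant in the counts, the negative Poisson log-likelihood at \(\eta = X\beta\) is \(\sum_i [e^{\eta_i} - y_i \eta_i]\). A small NumPy sketch of that formula (helper name assumed, not regreg's implementation):

```python
import numpy as np

def poisson_loss(beta, X, counts):
    # Negative Poisson log-likelihood, dropping the log(y_i!) constant:
    # sum_i [ exp(eta_i) - y_i * eta_i ],  eta = X beta.
    eta = X @ beta
    return np.sum(np.exp(eta) - counts * eta)

# At beta = 0 each term is exp(0) - y_i * 0 = 1, so the loss equals n.
X = np.eye(3)
counts = np.array([2.0, 0.0, 5.0])
print(poisson_loss(np.zeros(3), X, counts))  # 3.0
```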

proximal(quadratic)
proximal_optimum(quadratic)
proximal_step(quadratic, prox_control=None)

Compute the proximal optimization

Parameters

prox_control: [None, dict]

If not None, then a dictionary of parameters for the prox procedure

property quadratic

Quadratic part of the object, instance of regreg.identity_quadratic.identity_quadratic.

scale(obj, copy=False)
set_data(data)
set_lipschitz(value)
set_offset(value)
set_quadratic(quadratic)

Set the quadratic part of the composite.

classmethod shift(offset, coef=1, quadratic=None, **kws)

Keywords given in kws are passed to cls constructor along with other arguments

smooth_objective(beta, mode='both', check_feasibility=False)
Parameters

beta : ndarray

The current parameter values.

mode : str

One of [‘func’, ‘grad’, ‘both’].

check_feasibility : bool

If True, return np.inf when point is not feasible, i.e. when beta is not in the domain.

Returns

If mode is ‘func’, returns just the objective value at beta; if mode is ‘grad’, returns the gradient; otherwise returns both.
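The ‘func’/‘grad’/‘both’ contract can be mimicked in a few lines; this Gaussian stand-in is illustrative only and is not regreg's implementation.

```python
import numpy as np

def smooth_objective(beta, X, Y, mode='both'):
    # Dispatch on mode exactly as documented: value, gradient, or both.
    resid = X @ beta - Y
    if mode == 'func':
        return 0.5 * resid @ resid
    grad = X.T @ resid
    if mode == 'grad':
        return grad
    return 0.5 * resid @ resid, grad

X = np.eye(2)
Y = np.array([1.0, 2.0])
f, g = smooth_objective(np.zeros(2), X, Y, mode='both')
print(f, g)  # 2.5 and [-1., -2.]
```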

smoothed(smoothing_quadratic)

Add quadratic smoothing term

solve(quadratic=None, return_optimum=False, **fit_args)
subsample(idx)

Create a loss using a subsample of the data. Makes a copy of the loss and multiplies case_weights by the indicator for idx.

Parameters

idx : index

Indices of np.arange(n) to keep.

Returns

subsample_loss : glm

Loss after discarding all cases not in `idx`.
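The documented weight arithmetic ("multiplies case_weights by the indicator for idx") amounts to zeroing out the weights of every discarded case:

```python
import numpy as np

# subsample keeps only cases in idx by zeroing the other case_weights.
n = 6
case_weights = np.ones(n)
idx = np.array([0, 2, 5])
indicator = np.zeros(n)
indicator[idx] = 1.0
sub_weights = case_weights * indicator
print(sub_weights)  # weight 1 at positions 0, 2, 5; weight 0 elsewhere
```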

gaussian_loglike

class regreg.smooth.glm.gaussian_loglike(shape, response, coef=1.0, offset=None, quadratic=None, initial=None, case_weights=None)

Bases: regreg.smooth.smooth_atom

The Gaussian loss for observations \(y\):

\[\mu \mapsto \frac{1}{2} \|y-\mu\|^2_2\]

__init__(shape, response, coef=1.0, offset=None, quadratic=None, initial=None, case_weights=None)

Initialize self. See help(type(self)) for accurate signature.

classmethod affine(linear_operator, offset, coef=1, diag=False, quadratic=None, **kws)

Keywords given in kws are passed to cls constructor along with other arguments

apply_offset(x)

If self.offset is not None, return x-self.offset, else return x.

property conjugate
property data
get_conjugate()
get_data()
get_lipschitz()
get_offset()
get_quadratic()

Get the quadratic part of the composite.

hessian(natural_param)

Hessian of the loss.

Parameters

natural_param : ndarray

Parameters where Hessian will be evaluated.

Returns

hess : ndarray

A 1D-array representing the diagonal of the Hessian evaluated at natural_param.

latexify(var=None, idx='')
classmethod linear(linear_operator, coef=1, diag=False, offset=None, quadratic=None, **kws)

Keywords given in kws are passed to cls constructor along with other arguments

property lipschitz
mean_function(eta)
nonsmooth_objective(x, check_feasibility=False)
objective(x, check_feasibility=False)
objective_template = '\\ell^{\\text{Gauss}}\\left(%(var)s\\right)'
objective_vars = {'coef': 'C', 'offset': '\\alpha+', 'shape': 'p', 'var': '\\beta'}
property offset
proximal(quadratic)
proximal_optimum(quadratic)
proximal_step(quadratic, prox_control=None)

Compute the proximal optimization

Parameters

prox_control: [None, dict]

If not None, then a dictionary of parameters for the prox procedure

property quadratic

Quadratic part of the object, instance of regreg.identity_quadratic.identity_quadratic.

scale(obj, copy=False)
set_data(data)
set_lipschitz(value)
set_offset(value)
set_quadratic(quadratic)

Set the quadratic part of the composite.

classmethod shift(offset, coef=1, quadratic=None, **kws)

Keywords given in kws are passed to cls constructor along with other arguments

smooth_objective(natural_param, mode='both', check_feasibility=False)

Evaluate the smooth objective, computing its value, gradient or both.

Parameters

natural_param : ndarray

The current parameter values.

mode : str

One of [‘func’, ‘grad’, ‘both’].

check_feasibility : bool

If True, return np.inf when point is not feasible, i.e. when natural_param is not in the domain.

Returns

If mode is ‘func’, returns just the objective value at natural_param; if mode is ‘grad’, returns the gradient; otherwise returns both.

smoothed(smoothing_quadratic)

Add quadratic smoothing term

solve(quadratic=None, return_optimum=False, **fit_args)

glm

class regreg.smooth.glm.glm(X, Y, loss, quadratic=None, initial=None, offset=None)

Bases: regreg.smooth.smooth_atom

A general linear model, usually a log-likelihood for response \(Y\) whose mean is modelled through \(X\beta\).

Usual examples are Gaussian (least squares regression), logistic regression and Poisson log-linear regression.

Huber loss is also implemented as an example.

__init__(X, Y, loss, quadratic=None, initial=None, offset=None)
Parameters

X : ndarray

The design matrix.

Y : ndarray

The response.

loss : regreg.smooth.smooth_atom

The loss function that takes arguments the same size as Y. So, for Gaussian regression the loss is just the map \(\mu \mapsto \|\mu - Y\|^2_2/2\).

quadratic : regreg.identity_quadratic.identity_quadratic

Optional quadratic part added to objective.

initial : ndarray

An initial guess at the minimizer.
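As described, a glm objective is just the saturated-space loss composed with the linear predictor (plus an optional offset). A toy sketch of that composition, with hypothetical helper names:

```python
import numpy as np

def make_glm_objective(X, saturated_loss, offset=None):
    # objective(beta) = loss(X beta + offset): the glm pattern in miniature.
    def objective(beta):
        eta = X @ beta
        if offset is not None:
            eta = eta + offset
        return saturated_loss(eta)
    return objective

X = np.eye(2)
Y = np.array([3.0, 4.0])
gaussian = lambda mu: 0.5 * np.sum((mu - Y) ** 2)  # mu -> ||mu - Y||^2 / 2
obj = make_glm_objective(X, gaussian)
print(obj(np.zeros(2)))  # 12.5
```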

classmethod affine(linear_operator, offset, coef=1, diag=False, quadratic=None, **kws)

Keywords given in kws are passed to cls constructor along with other arguments

apply_offset(x)

If self.offset is not None, return x-self.offset, else return x.

property conjugate
property data

Data for the general linear model.

classmethod gaussian(X, response, coef=1.0, offset=None, quadratic=None, initial=None)

Create a loss for a Gaussian regression model.

Parameters

X : [ndarray, regreg.affine.affine_transform]

Design matrix

Y : ndarray

Response vector.

offset : ndarray (optional)

Offset to be applied in parameter space before evaluating loss.

quadratic : regreg.identity_quadratic.identity_quadratic (optional)

Optional quadratic to be added to objective.

initial : ndarray

Initial guess at coefficients.

Returns

glm_obj : regreg.glm.glm

General linear model loss.

get_conjugate()
get_data()
get_lipschitz()
get_offset()
get_quadratic()

Get the quadratic part of the composite.

gradient(beta)

Compute the gradient of the loss \(\nabla \ell(X\beta)\).

Parameters

beta : ndarray

Parameters.

Returns

grad : ndarray

Gradient of the loss at \(\beta\).

hessian(beta)

Compute the Hessian of the loss \(\nabla^2 \ell(X\beta)\).

Parameters

beta : ndarray

Parameters.

Returns

hess : ndarray

Hessian of the loss at \(\beta\), if defined.

classmethod huber(X, response, smoothing_parameter, coef=1.0, offset=None, quadratic=None, initial=None)

Create a loss for a regression model using Huber loss.

Parameters

X : [ndarray, regreg.affine.affine_transform]

Design matrix

response : ndarray

Response vector.

smoothing_parameter : float

Smoothing parameter for Huber loss.

offset : ndarray (optional)

Offset to be applied in parameter space before evaluating loss.

quadratic : regreg.identity_quadratic.identity_quadratic (optional)

Optional quadratic to be added to objective.

initial : ndarray

Initial guess at coefficients.

Returns

glm_obj : regreg.glm.glm

General linear model loss.

latexify(var=None, idx='')
classmethod linear(linear_operator, coef=1, diag=False, offset=None, quadratic=None, **kws)

Keywords given in kws are passed to cls constructor along with other arguments

linear_predictor(beta)

Compute \(X\beta\).

Parameters

beta : ndarray

Parameters.

Returns

linpred : ndarray

property lipschitz
classmethod logistic(X, successes, trials=None, coef=1.0, offset=None, quadratic=None, initial=None)

Create a loss for a logistic regression model.

Parameters

X : [ndarray, regreg.affine.affine_transform]

Design matrix

successes : ndarray

Responses (should be non-negative integers).

trials : ndarray (optional)

Number of trials for each success. If None, defaults to np.ones_like(successes).

offset : ndarray (optional)

Offset to be applied in parameter space before evaluating loss.

quadratic : regreg.identity_quadratic.identity_quadratic (optional)

Optional quadratic to be added to objective.

initial : ndarray

Initial guess at coefficients.

Returns

glm_obj : regreg.glm.glm

General linear model loss.

nonsmooth_objective(x, check_feasibility=False)
objective(beta)

Compute the loss \(\ell(X\beta)\).

Parameters

beta : ndarray

Parameters.

Returns

objective : float

Value of the loss at \(\beta\).

objective_template = 'f(%(var)s)'
objective_vars = {'coef': 'C', 'offset': '\\alpha+', 'shape': 'p', 'var': '\\beta'}
property offset
classmethod poisson(X, counts, coef=1.0, offset=None, quadratic=None, initial=None)

Create a loss for a Poisson regression model.

Parameters

X : [ndarray, regreg.affine.affine_transform]

Design matrix

counts : ndarray

Response vector. Should be non-negative integers.

offset : ndarray (optional)

Offset to be applied in parameter space before evaluating loss.

quadratic : regreg.identity_quadratic.identity_quadratic (optional)

Optional quadratic to be added to objective.

initial : ndarray

Initial guess at coefficients.

Returns

glm_obj : regreg.glm.glm

General linear model loss.

proximal(quadratic)
proximal_optimum(quadratic)
proximal_step(quadratic, prox_control=None)

Compute the proximal optimization

Parameters

prox_control: [None, dict]

If not None, then a dictionary of parameters for the prox procedure

property quadratic

Quadratic part of the object, instance of regreg.identity_quadratic.identity_quadratic.

scale(obj, copy=False)
set_data(data)
set_lipschitz(value)
set_offset(value)
set_quadratic(quadratic)

Set the quadratic part of the composite.

classmethod shift(offset, coef=1, quadratic=None, **kws)

Keywords given in kws are passed to cls constructor along with other arguments

smooth_objective(beta, mode='func', check_feasibility=False)
Parameters

beta : ndarray

The current parameter values.

mode : str

One of [‘func’, ‘grad’, ‘both’].

check_feasibility : bool

If True, return np.inf when point is not feasible, i.e. when beta is not in the domain.

Returns

If mode is ‘func’, returns just the objective value at beta; if mode is ‘grad’, returns the gradient; otherwise returns both.

smoothed(smoothing_quadratic)

Add quadratic smoothing term

solve(quadratic=None, return_optimum=False, **fit_args)
subsample(idx)

Create a loss using a subsample of the data. Makes a copy of the loss and multiplies case_weights by the indicator for idx.

Parameters

idx : index

Indices of np.arange(n) to keep.

Returns

subsample_loss : glm

Loss after discarding all cases not in `idx`.

huber_loss

class regreg.smooth.glm.huber_loss(shape, response, smoothing_parameter, coef=1.0, offset=None, quadratic=None, initial=None, case_weights=None)

Bases: regreg.smooth.smooth_atom

__init__(shape, response, smoothing_parameter, coef=1.0, offset=None, quadratic=None, initial=None, case_weights=None)

Initialize self. See help(type(self)) for accurate signature.

classmethod affine(linear_operator, offset, coef=1, diag=False, quadratic=None, **kws)

Keywords given in kws are passed to cls constructor along with other arguments

apply_offset(x)

If self.offset is not None, return x-self.offset, else return x.

property conjugate
property data
get_conjugate()
get_data()
get_lipschitz()
get_offset()
get_quadratic()

Get the quadratic part of the composite.

hessian(param)

Hessian of the loss.

Parameters

param : ndarray

Parameters where Hessian will be evaluated.

Returns

hess : ndarray

A 1D-array representing the diagonal of the Hessian evaluated at param.

latexify(var=None, idx='')
classmethod linear(linear_operator, coef=1, diag=False, offset=None, quadratic=None, **kws)

Keywords given in kws are passed to cls constructor along with other arguments

property lipschitz
nonsmooth_objective(x, check_feasibility=False)
objective(x, check_feasibility=False)
objective_template = '\\ell^{\\text{Huber}}\\left(%(var)s\\right)'
objective_vars = {'coef': 'C', 'offset': '\\alpha+', 'shape': 'p', 'var': '\\beta'}
property offset
proximal(quadratic)
proximal_optimum(quadratic)
proximal_step(quadratic, prox_control=None)

Compute the proximal optimization

Parameters

prox_control: [None, dict]

If not None, then a dictionary of parameters for the prox procedure

property quadratic

Quadratic part of the object, instance of regreg.identity_quadratic.identity_quadratic.

scale(obj, copy=False)
set_data(data)
set_lipschitz(value)
set_offset(value)
set_quadratic(quadratic)

Set the quadratic part of the composite.

classmethod shift(offset, coef=1, quadratic=None, **kws)

Keywords given in kws are passed to cls constructor along with other arguments

smooth_objective(param, mode='both', check_feasibility=False)

Evaluate the smooth objective, computing its value, gradient or both.

Parameters

param : ndarray

The current parameter values.

mode : str

One of [‘func’, ‘grad’, ‘both’].

check_feasibility : bool

If True, return np.inf when point is not feasible, i.e. when param is not in the domain.

Returns

If mode is ‘func’, returns just the objective value at param; if mode is ‘grad’, returns the gradient; otherwise returns both.

smoothed(smoothing_quadratic)

Add quadratic smoothing term

solve(quadratic=None, return_optimum=False, **fit_args)

logistic_loglike

class regreg.smooth.glm.logistic_loglike(shape, successes, trials=None, coef=1.0, offset=None, quadratic=None, initial=None, case_weights=None)

Bases: regreg.smooth.smooth_atom

A class for combining the logistic log-likelihood with a general seminorm

__init__(shape, successes, trials=None, coef=1.0, offset=None, quadratic=None, initial=None, case_weights=None)

Initialize self. See help(type(self)) for accurate signature.

classmethod affine(linear_operator, offset, coef=1, diag=False, quadratic=None, **kws)

Keywords given in kws are passed to cls constructor along with other arguments

apply_offset(x)

If self.offset is not None, return x-self.offset, else return x.

property conjugate
property data
get_conjugate()
get_data()
get_lipschitz()
get_offset()
get_quadratic()

Get the quadratic part of the composite.

hessian(natural_param)

Hessian of the loss.

Parameters

natural_param : ndarray

Parameters where Hessian will be evaluated.

Returns

hess : ndarray

A 1D-array representing the diagonal of the Hessian evaluated at natural_param.

latexify(var=None, idx='')
classmethod linear(linear_operator, coef=1, diag=False, offset=None, quadratic=None, **kws)

Keywords given in kws are passed to cls constructor along with other arguments

property lipschitz
mean_function(eta, trials=None)
nonsmooth_objective(x, check_feasibility=False)
objective(x, check_feasibility=False)
objective_template = '\\ell^{\\text{logit}}\\left(%(var)s\\right)'
objective_vars = {'coef': 'C', 'offset': '\\alpha+', 'shape': 'p', 'var': '\\beta'}
property offset
proximal(quadratic)
proximal_optimum(quadratic)
proximal_step(quadratic, prox_control=None)

Compute the proximal optimization

Parameters

prox_control: [None, dict]

If not None, then a dictionary of parameters for the prox procedure

property quadratic

Quadratic part of the object, instance of regreg.identity_quadratic.identity_quadratic.

scale(obj, copy=False)
set_data(data)
set_lipschitz(value)
set_offset(value)
set_quadratic(quadratic)

Set the quadratic part of the composite.

classmethod shift(offset, coef=1, quadratic=None, **kws)

Keywords given in kws are passed to cls constructor along with other arguments

smooth_objective(natural_param, mode='both', check_feasibility=False)

Evaluate the smooth objective, computing its value, gradient or both.

Parameters

natural_param : ndarray

The current parameter values.

mode : str

One of [‘func’, ‘grad’, ‘both’].

check_feasibility : bool

If True, return np.inf when point is not feasible, i.e. when natural_param is not in the domain.

Returns

If mode is ‘func’, returns just the objective value at natural_param; if mode is ‘grad’, returns the gradient; otherwise returns both.

smoothed(smoothing_quadratic)

Add quadratic smoothing term

solve(quadratic=None, return_optimum=False, **fit_args)

multinomial_loglike

class regreg.smooth.glm.multinomial_loglike(shape, counts, coef=1.0, offset=None, initial=None, quadratic=None)

Bases: regreg.smooth.smooth_atom

A class for baseline-category logistic regression for nominal responses (e.g. Agresti, pg 267)
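For intuition, the multinomial negative log-likelihood over \(K\) categories can be written \(\sum_i [n_i \cdot \mathrm{logsumexp}(\eta_i) - \sum_k y_{ik}\eta_{ik}]\). The NumPy sketch below shows the over-parameterized version (a true baseline-category model would fix one column of eta at 0); names and normalization are illustrative assumptions, not regreg's implementation.

```python
import numpy as np

def multinomial_loss(eta, counts):
    # eta, counts: n x K arrays of natural parameters and category counts.
    # sum_i [ n_i * log(sum_k exp(eta_ik)) - sum_k counts_ik * eta_ik ]
    n_i = counts.sum(axis=1)
    lse = np.log(np.exp(eta).sum(axis=1))
    return np.sum(n_i * lse - np.sum(counts * eta, axis=1))

# With eta = 0 over K = 3 categories, each trial contributes log 3.
counts = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
eta = np.zeros((2, 3))
print(multinomial_loss(eta, counts))  # 3 * log 3
```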

__init__(shape, counts, coef=1.0, offset=None, initial=None, quadratic=None)

Initialize self. See help(type(self)) for accurate signature.

classmethod affine(linear_operator, offset, coef=1, diag=False, quadratic=None, **kws)

Keywords given in kws are passed to cls constructor along with other arguments

apply_offset(x)

If self.offset is not None, return x-self.offset, else return x.

property conjugate
get_conjugate()
get_lipschitz()
get_offset()
get_quadratic()

Get the quadratic part of the composite.

latexify(var=None, idx='')
classmethod linear(linear_operator, coef=1, diag=False, offset=None, quadratic=None, **kws)

Keywords given in kws are passed to cls constructor along with other arguments

property lipschitz
nonsmooth_objective(x, check_feasibility=False)
objective(x, check_feasibility=False)
objective_template = '\\ell^{M}\\left(%(var)s\\right)'
objective_vars = {'coef': 'C', 'offset': '\\alpha+', 'shape': 'p', 'var': '\\beta'}
property offset
proximal(quadratic)
proximal_optimum(quadratic)
proximal_step(quadratic, prox_control=None)

Compute the proximal optimization

Parameters

prox_control: [None, dict]

If not None, then a dictionary of parameters for the prox procedure

property quadratic

Quadratic part of the object, instance of regreg.identity_quadratic.identity_quadratic.

scale(obj, copy=False)
set_lipschitz(value)
set_offset(value)
set_quadratic(quadratic)

Set the quadratic part of the composite.

classmethod shift(offset, coef=1, quadratic=None, **kws)

Keywords given in kws are passed to cls constructor along with other arguments

smooth_objective(x, mode='both', check_feasibility=False)

Evaluate a smooth function and/or its gradient

If mode == ‘both’, return both function value and gradient; if mode == ‘grad’, return only the gradient; if mode == ‘func’, return only the function value.

smoothed(smoothing_quadratic)

Add quadratic smoothing term

solve(quadratic=None, return_optimum=False, **fit_args)

poisson_loglike

class regreg.smooth.glm.poisson_loglike(shape, counts, coef=1.0, offset=None, quadratic=None, initial=None, case_weights=None)

Bases: regreg.smooth.smooth_atom

A class for combining the Poisson log-likelihood with a general seminorm

__init__(shape, counts, coef=1.0, offset=None, quadratic=None, initial=None, case_weights=None)

Initialize self. See help(type(self)) for accurate signature.

classmethod affine(linear_operator, offset, coef=1, diag=False, quadratic=None, **kws)

Keywords given in kws are passed to cls constructor along with other arguments

apply_offset(x)

If self.offset is not None, return x-self.offset, else return x.

property conjugate
property data
get_conjugate()
get_data()
get_lipschitz()
get_offset()
get_quadratic()

Get the quadratic part of the composite.

hessian(natural_param)

Hessian of the loss.

Parameters

natural_param : ndarray

Parameters where Hessian will be evaluated.

Returns

hess : ndarray

A 1D-array representing the diagonal of the Hessian evaluated at natural_param.

latexify(var=None, idx='')
classmethod linear(linear_operator, coef=1, diag=False, offset=None, quadratic=None, **kws)

Keywords given in kws are passed to cls constructor along with other arguments

property lipschitz
mean_function(eta)
nonsmooth_objective(x, check_feasibility=False)
objective(x, check_feasibility=False)
objective_template = '\\ell^{\\text{Pois}}\\left(%(var)s\\right)'
objective_vars = {'coef': 'C', 'offset': '\\alpha+', 'shape': 'p', 'var': '\\beta'}
property offset
proximal(quadratic)
proximal_optimum(quadratic)
proximal_step(quadratic, prox_control=None)

Compute the proximal optimization

Parameters

prox_control: [None, dict]

If not None, then a dictionary of parameters for the prox procedure

property quadratic

Quadratic part of the object, instance of regreg.identity_quadratic.identity_quadratic.

scale(obj, copy=False)
set_data(data)
set_lipschitz(value)
set_offset(value)
set_quadratic(quadratic)

Set the quadratic part of the composite.

classmethod shift(offset, coef=1, quadratic=None, **kws)

Keywords given in kws are passed to cls constructor along with other arguments

smooth_objective(natural_param, mode='both', check_feasibility=False)

Evaluate the smooth objective, computing its value, gradient or both.

Parameters

natural_param : ndarray

The current parameter values.

mode : str

One of [‘func’, ‘grad’, ‘both’].

check_feasibility : bool

If True, return np.inf when point is not feasible, i.e. when natural_param is not in the domain.

Returns

If mode is ‘func’, returns just the objective value at natural_param; if mode is ‘grad’, returns the gradient; otherwise returns both.

smoothed(smoothing_quadratic)

Add quadratic smoothing term

solve(quadratic=None, return_optimum=False, **fit_args)

Function

regreg.smooth.glm.logistic_loss(X, Y, trials=None, coef=1.0)

Construct a logistic loss function for successes Y and affine transform X.

Parameters

X : [affine_transform, ndarray]

Design matrix

Y : ndarray