public class HardEM extends ExpectationMaximization
Nested classes/interfaces inherited from class VotedPerceptron: VotedPerceptron.IntermediateState
Modifier and Type | Field and Description |
---|---|
static boolean | ADAGRAD_DEFAULT: Default value for ADAGRAD_KEY |
static String | ADAGRAD_KEY: Key for the Boolean property that indicates whether to use AdaGrad subgradient scaling, the adaptive subgradient algorithm of John Duchi, Elad Hazan, and Yoram Singer (JMLR 2011). |
static String | CONFIG_PREFIX: Prefix of property keys used by this class. |
Fields inherited from class ExpectationMaximization: ITER_DEFAULT, ITER_KEY, iterations, latentVariableReasoner, RESET_SCHEDULE_DEFAULT, RESET_SCHEDULE_KEY, resetSchedule, STORE_WEIGHTS_DEFAULT, STORE_WEIGHTS_KEY, storedWeights, storeWeights, tolerance, TOLERANCE_DEFAULT, TOLERANCE_KEY

Fields inherited from class VotedPerceptron: AUGMENT_LOSS_DEFAULT, AUGMENT_LOSS_KEY, augmentLoss, AVERAGE_STEPS_DEFAULT, AVERAGE_STEPS_KEY, averageSteps, expectedIncompatibility, L1_REGULARIZATION_DEFAULT, L1_REGULARIZATION_KEY, l1Regularization, L2_REGULARIZATION_DEFAULT, L2_REGULARIZATION_KEY, l2Regularization, NONNEGATIVE_WEIGHTS_DEFAULT, NONNEGATIVE_WEIGHTS_KEY, nonnegativeWeights, NUM_STEPS_DEFAULT, NUM_STEPS_KEY, numGroundings, numSteps, SCALE_GRADIENT_DEFAULT, SCALE_GRADIENT_KEY, scaleGradient, scheduleStepSize, STEP_SCHEDULE_DEFAULT, STEP_SCHEDULE_KEY, STEP_SIZE_DEFAULT, STEP_SIZE_KEY, stepSize, toStop, truthIncompatibility

Fields inherited from class WeightLearningApplication: config, immutableKernels, kernels, model, observedDB, reasoner, REASONER_DEFAULT, REASONER_KEY, rvDB, trainingMap
Constructor and Description |
---|
HardEM(Model model, Database rvDB, Database observedDB, ConfigBundle config) |
Modifier and Type | Method and Description |
---|---|
protected double[] | computeExpectedIncomp(): Computes the expected (unweighted) total incompatibility of the GroundCompatibilityKernels in reasoner for each CompatibilityKernel. |
protected double | computeLoss(): Internal method for computing the loss at the current point before taking a step. |
protected double[] | computeObservedIncomp() |
protected double[] | computeScalingFactor(): Computes the amount by which to scale the gradient for each rule: the rule's number of groundings, or 1.0 if the rule has no groundings in the training set. |
protected void | doLearn() |
protected void | minimizeKLDivergence(): Minimizes the KL divergence by setting the latent variables to their most probable state conditioned on the evidence and the labeled random variables. |
Methods inherited from class ExpectationMaximization: close, getStepSize, getStoredWeights, inferLatentVariables, initGroundModel

Methods inherited from class VotedPerceptron: addLossAugmentedKernels, computeRegularizer, getLoss, removeLossAugmentedKernels, stop

Methods inherited from class WeightLearningApplication: cleanUpGroundModel, learn, setLabeledRandomVariables

Methods inherited from class java.util.Observable: addObserver, clearChanged, countObservers, deleteObserver, deleteObservers, hasChanged, notifyObservers, notifyObservers, setChanged
public static final String CONFIG_PREFIX

Prefix of property keys used by this class.
See Also: ConfigManager, Constant Field Values

public static final String ADAGRAD_KEY

Key for the Boolean property that indicates whether to use AdaGrad subgradient scaling.

public static final boolean ADAGRAD_DEFAULT

Default value for ADAGRAD_KEY.
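The AdaGrad option above can be illustrated with a small standalone sketch. This is not the PSL implementation; AdaGradSketch and its member names are hypothetical, shown only to convey the idea from Duchi, Hazan, and Singer that each weight's step size shrinks with that weight's accumulated squared subgradients.

```java
// Hypothetical sketch of AdaGrad subgradient scaling: each weight's step
// size is the base step size divided by the square root of that weight's
// accumulated squared subgradients.
public class AdaGradSketch {
    private final double[] sumSq;      // running sum of squared subgradients, per weight
    private final double baseStepSize; // unscaled step size

    public AdaGradSketch(int numWeights, double baseStepSize) {
        this.sumSq = new double[numWeights];
        this.baseStepSize = baseStepSize;
    }

    /** Returns the per-weight step sizes after observing one subgradient. */
    public double[] scaledSteps(double[] subgradient) {
        double[] steps = new double[subgradient.length];
        for (int i = 0; i < subgradient.length; i++) {
            sumSq[i] += subgradient[i] * subgradient[i];
            // A weight that has never seen a nonzero subgradient keeps the base step size.
            steps[i] = (sumSq[i] > 0.0) ? baseStepSize / Math.sqrt(sumSq[i]) : baseStepSize;
        }
        return steps;
    }

    public static void main(String[] args) {
        AdaGradSketch adagrad = new AdaGradSketch(2, 1.0);
        // The step shrinks only for the coordinate whose gradients accumulate.
        double[] first = adagrad.scaledSteps(new double[] {2.0, 0.0});
        System.out.println(first[0] + " " + first[1]);
    }
}
```

In the weight-learning loop, frequently updated weights thus take progressively smaller steps, while rarely updated ones keep larger steps.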
public HardEM(Model model, Database rvDB, Database observedDB, ConfigBundle config)
protected void minimizeKLDivergence()

Minimizes the KL divergence by setting the latent variables to their most probable state conditioned on the evidence and the labeled random variables. This method assumes that the inferred truth values will be used immediately by VotedPerceptron.computeObservedIncomp().

Overrides: minimizeKLDivergence in class ExpectationMaximization
protected double[] computeExpectedIncomp()

Description copied from class: VotedPerceptron
Computes the expected (unweighted) total incompatibility of the GroundCompatibilityKernels in reasoner for each CompatibilityKernel.

Overrides: computeExpectedIncomp in class VotedPerceptron
protected double[] computeObservedIncomp()

Overrides: computeObservedIncomp in class VotedPerceptron
protected double computeLoss()

Description copied from class: VotedPerceptron
Internal method for computing the loss at the current point before taking a step.

Overrides: computeLoss in class VotedPerceptron
protected void doLearn()

Overrides: doLearn in class ExpectationMaximization
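The alternation that doLearn() drives can be sketched abstractly. HardEMLoopSketch and its functional parameters are hypothetical stand-ins, not PSL API: the real class works against a Database and a Reasoner, while here both steps are opaque callbacks. The hard E-step commits the latent variables to their most probable state (as minimizeKLDivergence() does), and the M-step takes voted-perceptron-style weight updates against that completed assignment.

```java
import java.util.function.Consumer;
import java.util.function.UnaryOperator;

// Hypothetical outline of a round-based hard-EM learner.
public class HardEMLoopSketch {
    public static double[] run(double[] initialWeights,
                               Consumer<double[]> hardEStep,   // MPE assignment to latent variables
                               UnaryOperator<double[]> mStep,  // gradient-based weight update
                               int iterations) {
        double[] weights = initialWeights.clone();
        for (int round = 0; round < iterations; round++) {
            // E-step: condition on evidence and labels, fix latents to their most probable state.
            hardEStep.accept(weights);
            // M-step: update the weights against the completed assignment.
            weights = mStep.apply(weights);
        }
        return weights;
    }
}
```

The design point is that the E-step mutates shared state (in PSL, the inferred truth values) rather than returning a value, which is why the real class warns that computeObservedIncomp() must consume those values immediately.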
protected double[] computeScalingFactor()

Description copied from class: VotedPerceptron
Computes the amount by which to scale the gradient for each rule: the rule's number of groundings, or 1.0 if the rule has no groundings in the training set.

Overrides: computeScalingFactor in class VotedPerceptron
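The scaling rule documented for computeScalingFactor() can be made concrete with a standalone sketch. ScalingSketch and its array-based signature are illustrative assumptions, not the PSL method, which reads grounding counts from the ground model.

```java
// Hypothetical sketch of the documented scaling rule: each rule's gradient
// is scaled by that rule's grounding count, falling back to 1.0 for rules
// with no groundings in the training set (avoiding a divide-by-zero when
// the factor is used as a divisor).
public class ScalingSketch {
    public static double[] computeScalingFactor(int[] numGroundings) {
        double[] factor = new double[numGroundings.length];
        for (int i = 0; i < numGroundings.length; i++) {
            factor[i] = (numGroundings[i] > 0) ? numGroundings[i] : 1.0;
        }
        return factor;
    }
}
```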