thoth.adviser.predictors package

Submodules

thoth.adviser.predictors.annealing module

Implementation of Adaptive Simulated Annealing (ASA) used to resolve software stacks.

class thoth.adviser.predictors.annealing.AdaptiveSimulatedAnnealing(*, keep_history: Optional[Any] = None, temperature_coefficient: float = 0.999, temperature_history: List[Tuple[Optional[float], Optional[bool], Optional[float], int]] = NOTHING, temperature: float = 0.0)

Bases: thoth.adviser.predictor.Predictor

Implementation of adaptive simulated annealing looking for stacks based on the scoring function.

plot() → matplotlib.figure.Figure

Plot temperature history of adaptive simulated annealing.

pre_run() → None

Initialize before the actual annealing run.

run() → Tuple[thoth.adviser.state.State, Tuple[str, str, str]]

Run adaptive simulated annealing on top of the beam.

temperature_coefficient
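
The key mechanic of simulated annealing is that a worse candidate state is sometimes accepted, with a probability that shrinks as the temperature cools. A minimal sketch of the classic acceptance rule (illustrative only; the exact formula and bookkeeping inside AdaptiveSimulatedAnnealing may differ):

    from math import exp

    def acceptance_probability(top_score: float, neighbour_score: float,
                               temperature: float) -> float:
        """Classic simulated-annealing acceptance rule (illustrative only)."""
        if neighbour_score >= top_score:
            return 1.0  # an improvement is always accepted
        # A worse candidate is accepted with a probability that shrinks as
        # the temperature cools; a geometric schedule such as
        # temperature *= temperature_coefficient drives the cooling.
        return exp((neighbour_score - top_score) / temperature)

    print(acceptance_probability(1.0, 0.5, temperature=10.0))  # ~0.95: exploratory
    print(acceptance_probability(1.0, 0.5, temperature=0.1))   # ~0.007: greedy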

thoth.adviser.predictors.hill_climbing module

Implementation of hill climbing in the state space.

class thoth.adviser.predictors.hill_climbing.HillClimbing(*, keep_history: Optional[Any] = None)

Bases: thoth.adviser.predictor.Predictor

Implementation of hill climbing in the state space.

plot() → matplotlib.figure.Figure

Plot score of the highest-rated stack during hill climbing.

pre_run() → None

Initialize before the actual hill climbing run.

run() → Tuple[thoth.adviser.state.State, Tuple[str, str, str]]

Get the top state from the beam for the next resolution round.
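
Hill climbing is the greedy baseline: each resolution round it expands the highest-scored state kept in the beam. A minimal sketch of the idea (the tuple-based beam below is an illustrative stand-in for the resolver's beam and state objects):

    from typing import List, Tuple

    def hill_climbing_step(beam: List[Tuple[float, str]]) -> Tuple[float, str]:
        """Pick the best-scored state for expansion (illustrative)."""
        # Each entry stands in for a resolver state; the first element is its score.
        return max(beam, key=lambda state: state[0])

    beam = [(0.2, "stack-a"), (0.9, "stack-b"), (0.5, "stack-c")]
    print(hill_climbing_step(beam))  # (0.9, 'stack-b') - always climb upward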

thoth.adviser.predictors.latest module

Implementation of predictor used for resolving latest stacks in the state space.

class thoth.adviser.predictors.latest.ApproximatingLatest(*, keep_history: Optional[Any] = None, prioritized_packages: List[str] = NOTHING)

Bases: thoth.adviser.predictors.hill_climbing.HillClimbing

Implementation of predictor used for resolving latest stacks in the state space.

This predictor approximates resolution to the latest software stack. The resolution to the latest versions is approximated using continuous resolution with optional randomness to avoid getting stuck in a “trap” when a resolution using all the latest versions cannot be satisfied.

pre_run() → None

Initialize local variables before each predictor run per resolver.

prioritized_packages

run() → Tuple[thoth.adviser.state.State, Tuple[str, str, str]]

Get the last state expanded and expand its first unresolved dependency.

set_reward_signal(state: thoth.adviser.state.State, package_tuple: Tuple[str, str, str], reward: float) → None

Set hop to True if we did not resolve any stack with the latest versions.
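
A minimal instantiation sketch using only the keyword arguments shown in the signature above (the package names are illustrative):

    from thoth.adviser.predictors import ApproximatingLatest

    # Resolve the listed packages first while approximating the latest stack;
    # the predictor "hops" randomly when the all-latest resolution gets stuck.
    predictor = ApproximatingLatest(prioritized_packages=["tensorflow", "flask"])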

thoth.adviser.predictors.mcts module

Implementation of a Monte Carlo Tree Search (MCTS) based predictor with an adaptive simulated annealing schedule.

class thoth.adviser.predictors.mcts.MCTS(*, keep_history: Optional[Any] = None, temperature_coefficient: float = 0.999, temperature_history: List[Tuple[Optional[float], Optional[bool], Optional[float], int]] = NOTHING, temperature: float = 0.0, step: int = 1, trace: bool = True)

Bases: thoth.adviser.predictors.td.TemporalDifference

Implementation of a Monte Carlo Tree Search (MCTS) based predictor with an adaptive simulated annealing schedule.

pre_run() → None

Initialize this predictor before the run.

run() → Tuple[thoth.adviser.state.State, Tuple[str, str, str]]

Run MCTS with an adaptive simulated annealing schedule.

set_reward_signal(state: thoth.adviser.state.State, _: Tuple[str, str, str], reward: float) → None

Note down the reward signal of the last action performed.
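
Conceptually, MCTS scores actions by propagating the reward of a finished walk back along the trajectory that produced it, so frequently rewarding regions of the dependency graph are preferred later. A sketch of that backpropagation step (the bookkeeping structure is an assumption, not the predictor's actual internals):

    from collections import defaultdict
    from typing import Dict, List, Tuple

    # package_tuple -> [total_reward, visit_count]  (hypothetical bookkeeping)
    policy: Dict[Tuple[str, str, str], list] = defaultdict(lambda: [0.0, 0])

    def backpropagate(trajectory: List[Tuple[str, str, str]], reward: float) -> None:
        """Credit every action on the finished walk with its final reward."""
        for package_tuple in trajectory:
            entry = policy[package_tuple]
            entry[0] += reward  # accumulate reward
            entry[1] += 1       # count visits so an average can be derived

    backpropagate([("flask", "2.0.1", "https://pypi.org/simple")], reward=0.7)
    print(policy[("flask", "2.0.1", "https://pypi.org/simple")])  # [0.7, 1]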

thoth.adviser.predictors.package_combinations module

Implementation of a predictor used for generating combinations of packages faster.

class thoth.adviser.predictors.package_combinations.PackageCombinations(*, keep_history: Optional[Any] = None, package_combinations=NOTHING)

Bases: thoth.adviser.predictor.Predictor

A predictor used for generating combinations of packages faster.

package_combinations

pre_run() → None

Check that the required attributes are set up.

run() → Tuple[thoth.adviser.state.State, Tuple[str, str, str]]

Run the predictor.
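
A minimal instantiation sketch based on the signature above; pre_run() then checks that package_combinations was supplied (the package names are illustrative):

    from thoth.adviser.predictors import PackageCombinations

    # Generate software stacks that differ in the combinations of the
    # listed packages instead of exploring the whole state space.
    predictor = PackageCombinations(package_combinations=["tensorflow", "numpy"])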

thoth.adviser.predictors.random_walk module

Implementation of a Random Walk-based dependency graph sampling predictor.

class thoth.adviser.predictors.random_walk.RandomWalk(*, keep_history: Optional[Any] = None, prioritized_packages: List[str] = NOTHING, prefer_recent: bool = False)

Bases: thoth.adviser.predictor.Predictor

Implementation of a Random Walk-based dependency graph sampling predictor.

plot() → matplotlib.figure.Figure

Plot score of the highest-rated stack during sampling.

pre_run() → None

Initialize before the random walk run.

prefer_recent

prioritized_packages

run() → Tuple[thoth.adviser.state.State, Tuple[str, str, str]]

Generate stacks using random walking.
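
A minimal instantiation sketch using the keyword arguments from the signature above (the package name is illustrative):

    from thoth.adviser.predictors import RandomWalk

    # Sample the dependency graph with random walks, biasing the walk
    # toward recent releases and resolving the listed packages first.
    predictor = RandomWalk(prefer_recent=True, prioritized_packages=["numpy"])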

thoth.adviser.predictors.sampling module

Implementation of a random sampling of the state space.

class thoth.adviser.predictors.sampling.Sampling(*, keep_history: Optional[Any] = None)

Bases: thoth.adviser.predictor.Predictor

Implementation of a random sampling of the state space.

plot() → matplotlib.figure.Figure

Plot score of the highest-rated stack during sampling.

pre_run() → None

Initialize before the sampling run.

run() → Tuple[thoth.adviser.state.State, Tuple[str, str, str]]

Get a random state and a random unresolved dependency from the beam for the next resolution round.
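
The behaviour of run() can be pictured as two independent random choices, as in this sketch (the dictionaries below are simplified stand-ins for the resolver's beam and state objects):

    import random
    from typing import Dict, List, Tuple

    def sampling_step(beam: List[Dict]) -> Tuple[Dict, str]:
        """Pick a random state, then a random unresolved dependency in it."""
        state = random.choice(beam)
        dependency = random.choice(state["unresolved"])
        return state, dependency

    beam = [{"score": 0.1, "unresolved": ["numpy", "flask"]},
            {"score": 0.7, "unresolved": ["werkzeug"]}]
    print(sampling_step(beam))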

thoth.adviser.predictors.td module

Implementation of a Temporal Difference (TD) based predictor with an adaptive simulated annealing schedule.

class thoth.adviser.predictors.td.TemporalDifference(*, keep_history: Optional[Any] = None, temperature_coefficient: float = 0.999, temperature_history: List[Tuple[Optional[float], Optional[bool], Optional[float], int]] = NOTHING, temperature: float = 0.0, step: int = 1, trace: bool = True)

Bases: thoth.adviser.predictors.annealing.AdaptiveSimulatedAnnealing

Implementation of a Temporal Difference (TD) based predictor with an adaptive simulated annealing schedule.

post_run() → None

De-initialize resources used by this predictor.

pre_run() → None

Initialize this predictor before the run.

run() → Tuple[thoth.adviser.state.State, Tuple[str, str, str]]

Run Temporal Difference (TD) with an adaptive simulated annealing schedule.

set_reward_signal(state: thoth.adviser.state.State, package_tuple: Tuple[str, str, str], reward: float) → None

Note down the reward signal of the last action performed.

step

trace
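
Temporal Difference learning keeps a running estimate of how rewarding each action (a package expansion) has been, and the annealing schedule balances exploiting high-average actions against exploring new ones. A sketch of the reward bookkeeping behind set_reward_signal() (a hypothetical structure; step and trace configure the update in the real predictor, which this sketch does not model):

    from collections import defaultdict
    from typing import Dict, Tuple

    # package_tuple -> (total_reward, visit_count)  (hypothetical storage)
    policy: Dict[Tuple[str, str, str], Tuple[float, int]] = defaultdict(lambda: (0.0, 0))

    def set_reward_signal(package_tuple: Tuple[str, str, str], reward: float) -> None:
        """Accumulate the observed reward so its average can steer exploitation."""
        total, count = policy[package_tuple]
        policy[package_tuple] = (total + reward, count + 1)

    def average_reward(package_tuple: Tuple[str, str, str]) -> float:
        total, count = policy[package_tuple]
        return total / count if count else 0.0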

Module contents

Implementation of predictors used with the resolver.

class thoth.adviser.predictors.AdaptiveSimulatedAnnealing(*, keep_history: Optional[Any] = None, temperature_coefficient: float = 0.999, temperature_history: List[Tuple[Optional[float], Optional[bool], Optional[float], int]] = NOTHING, temperature: float = 0.0)

Bases: thoth.adviser.predictor.Predictor

Implementation of adaptive simulated annealing looking for stacks based on the scoring function.

plot() → matplotlib.figure.Figure

Plot temperature history of adaptive simulated annealing.

pre_run() → None

Initialize before the actual annealing run.

run() → Tuple[thoth.adviser.state.State, Tuple[str, str, str]]

Run adaptive simulated annealing on top of the beam.

temperature_coefficient

class thoth.adviser.predictors.ApproximatingLatest(*, keep_history: Optional[Any] = None, prioritized_packages: List[str] = NOTHING)

Bases: thoth.adviser.predictors.hill_climbing.HillClimbing

Implementation of predictor used for resolving latest stacks in the state space.

This predictor approximates resolution to the latest software stack. The resolution to the latest versions is approximated using continuous resolution with optional randomness to avoid getting stuck in a “trap” when a resolution using all the latest versions cannot be satisfied.

pre_run() → None

Initialize local variables before each predictor run per resolver.

prioritized_packages

run() → Tuple[thoth.adviser.state.State, Tuple[str, str, str]]

Get the last state expanded and expand its first unresolved dependency.

set_reward_signal(state: thoth.adviser.state.State, package_tuple: Tuple[str, str, str], reward: float) → None

Set hop to True if we did not resolve any stack with the latest versions.

class thoth.adviser.predictors.HillClimbing(*, keep_history: Optional[Any] = None)

Bases: thoth.adviser.predictor.Predictor

Implementation of hill climbing in the state space.

plot() → matplotlib.figure.Figure

Plot score of the highest-rated stack during hill climbing.

pre_run() → None

Initialize before the actual hill climbing run.

run() → Tuple[thoth.adviser.state.State, Tuple[str, str, str]]

Get the top state from the beam for the next resolution round.

class thoth.adviser.predictors.MCTS(*, keep_history: Optional[Any] = None, temperature_coefficient: float = 0.999, temperature_history: List[Tuple[Optional[float], Optional[bool], Optional[float], int]] = NOTHING, temperature: float = 0.0, step: int = 1, trace: bool = True)

Bases: thoth.adviser.predictors.td.TemporalDifference

Implementation of a Monte Carlo Tree Search (MCTS) based predictor with an adaptive simulated annealing schedule.

pre_run() → None

Initialize this predictor before the run.

run() → Tuple[thoth.adviser.state.State, Tuple[str, str, str]]

Run MCTS with an adaptive simulated annealing schedule.

set_reward_signal(state: thoth.adviser.state.State, _: Tuple[str, str, str], reward: float) → None

Note down the reward signal of the last action performed.

class thoth.adviser.predictors.PackageCombinations(*, keep_history: Optional[Any] = None, package_combinations=NOTHING)

Bases: thoth.adviser.predictor.Predictor

A predictor used for generating combinations of packages faster.

package_combinations

pre_run() → None

Check that the required attributes are set up.

run() → Tuple[thoth.adviser.state.State, Tuple[str, str, str]]

Run the predictor.

class thoth.adviser.predictors.RandomWalk(*, keep_history: Optional[Any] = None, prioritized_packages: List[str] = NOTHING, prefer_recent: bool = False)

Bases: thoth.adviser.predictor.Predictor

Implementation of a Random Walk-based dependency graph sampling predictor.

plot() → matplotlib.figure.Figure

Plot score of the highest-rated stack during sampling.

pre_run() → None

Initialize before the random walk run.

prefer_recent

prioritized_packages

run() → Tuple[thoth.adviser.state.State, Tuple[str, str, str]]

Generate stacks using random walking.

class thoth.adviser.predictors.Sampling(*, keep_history: Optional[Any] = None)

Bases: thoth.adviser.predictor.Predictor

Implementation of a random sampling of the state space.

plot() → matplotlib.figure.Figure

Plot score of the highest-rated stack during sampling.

pre_run() → None

Initialize before the sampling run.

run() → Tuple[thoth.adviser.state.State, Tuple[str, str, str]]

Get a random state and a random unresolved dependency from the beam for the next resolution round.

class thoth.adviser.predictors.TemporalDifference(*, keep_history: Optional[Any] = None, temperature_coefficient: float = 0.999, temperature_history: List[Tuple[Optional[float], Optional[bool], Optional[float], int]] = NOTHING, temperature: float = 0.0, step: int = 1, trace: bool = True)

Bases: thoth.adviser.predictors.annealing.AdaptiveSimulatedAnnealing

Implementation of a Temporal Difference (TD) based predictor with an adaptive simulated annealing schedule.

post_run() → None

De-initialize resources used by this predictor.

pre_run() → None

Initialize this predictor before the run.

run() → Tuple[thoth.adviser.state.State, Tuple[str, str, str]]

Run Temporal Difference (TD) with an adaptive simulated annealing schedule.

set_reward_signal(state: thoth.adviser.state.State, package_tuple: Tuple[str, str, str], reward: float) → None

Note down the reward signal of the last action performed.

step

trace
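
As documented above, all predictors are re-exported at the package level, so they can be imported directly from thoth.adviser.predictors rather than from the individual submodules:

    from thoth.adviser.predictors import (
        AdaptiveSimulatedAnnealing,
        ApproximatingLatest,
        HillClimbing,
        MCTS,
        PackageCombinations,
        RandomWalk,
        Sampling,
        TemporalDifference,
    )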