helicon.optimize
Optimization: analytical screening, constraints, Gaussian process surrogate, Bayesian optimization, Pareto fronts.
Analytical Pre-Screening
helicon.optimize.analytical.screen_geometry(coils, *, z_min, z_max, n_pts=200, gamma=5.0 / 3.0, backend='auto')
Run the full Tier 1 analytical screening for a coil geometry.
Computes mirror ratio from Biot-Savart, then derives all analytical performance metrics in < 100 ms.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `coils` | list of Coil | Coil definitions. | *required* |
| `z_min` | float | Lower bound of the axial domain [m]. | *required* |
| `z_max` | float | Upper bound of the axial domain [m]. | *required* |
| `n_pts` | int | On-axis evaluation resolution. | `200` |
| `gamma` | float | Electron polytropic index. | `5.0 / 3.0` |
| `backend` | str | Biot-Savart backend. | `'auto'` |

Returns:

| Type | Description |
|---|---|
| `NozzleScreeningResult` | |
Source code in src/helicon/optimize/analytical.py
helicon.optimize.analytical.NozzleScreeningResult(mirror_ratio, thrust_coefficient, divergence_half_angle_deg, thrust_efficiency)
dataclass
Fast analytical screening metrics for a coil configuration.
Attributes:

| Name | Type | Description |
|---|---|---|
| `mirror_ratio` | float | R_B = B_throat / B_exit. Should be >> 1 for good confinement. |
| `thrust_coefficient` | float | C_T = F / (ṁ c_s). Dimensionless nozzle performance metric. |
| `divergence_half_angle_deg` | float | Plume half-angle estimate [°]. Lower is better. |
| `thrust_efficiency` | float | η_T = 1 − 1/√R_B. Fraction of thermal energy converted to thrust. |
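The efficiency relation above can be checked numerically. A minimal sketch in plain Python, independent of the helicon package:

```python
import math

def thrust_efficiency(mirror_ratio: float) -> float:
    """eta_T = 1 - 1/sqrt(R_B): fraction of thermal energy converted to thrust."""
    if mirror_ratio <= 1.0:
        return 0.0  # no magnetic expansion, nothing to convert
    return 1.0 - 1.0 / math.sqrt(mirror_ratio)

# A mirror ratio of 4 converts half the thermal energy;
# returns diminish as R_B grows.
print(thrust_efficiency(4.0))   # 0.5
print(thrust_efficiency(16.0))  # 0.75
```

This makes the ">> 1" guidance concrete: quadrupling R_B from 4 to 16 only raises η_T from 0.50 to 0.75.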
Constraints
helicon.optimize.constraints.CoilConstraints(max_total_mass_kg=None, max_total_power_W=None, max_B_conductor_T=None, current_density_Am2=10000000.0, conductor_resistivity_Ohm_m=1.72e-08, conductor_density_kg_m3=8960.0)
dataclass
Engineering constraint specification for coil optimization.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `max_total_mass_kg` | float or None | Maximum total mass of all coils combined [kg]. None = unconstrained. | `None` |
| `max_total_power_W` | float or None | Maximum total resistive power dissipation [W]. None = unconstrained. For superconducting coils set this to None (zero resistive loss). | `None` |
| `max_B_conductor_T` | float or None | Maximum peak field at the conductor surface [T]. REBCO tapes have a practical limit of ~15-20 T; copper coils are limited by structural / Lorentz force considerations. None = unconstrained. | `None` |
| `current_density_Am2` | float | Maximum current density for conductor sizing [A/m²]. Typical values: 1-10 MA/m² for copper, 100-500 MA/m² for REBCO. | `10000000.0` |
| `conductor_resistivity_Ohm_m` | float | Electrical resistivity of conductor [Ω·m]. Copper at 20 °C: 1.72e-8 Ω·m. Set to 0.0 for superconducting (zero resistive loss). | `1.72e-08` |
| `conductor_density_kg_m3` | float | Mass density of conductor [kg/m³]. Copper: 8960 kg/m³. | `8960.0` |
helicon.optimize.constraints.evaluate_constraints(coil_params, constraints, *, penalty_factor=1000.0)
Evaluate engineering constraints for a set of coil parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `coil_params` | array, shape (N_coils, 3) | Coil parameters with columns | *required* |
| `constraints` | CoilConstraints | Active constraint specification. | *required* |
| `penalty_factor` | float | Scaling factor for quadratic penalty (used by optimizer). | `1000.0` |

Returns:

| Type | Description |
|---|---|
| `CoilConstraintResult` | |
Source code in src/helicon/optimize/constraints.py
helicon.optimize.constraints.make_constrained_objective(objective_fn, constraints, *, penalty_factor=1000.0)
Wrap an MLX objective with a differentiable constraint penalty.
The returned function has the same signature as objective_fn but
adds a quadratic penalty for each violated constraint:
f_constrained(x) = objective_fn(x) + Σ penalty_factor * max(g_i(x), 0)²
All operations use MLX so the result is differentiable via mlx.core.grad.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `objective_fn` | callable | MLX-differentiable objective; takes | *required* |
| `constraints` | CoilConstraints | Active constraints. | *required* |
| `penalty_factor` | float | Quadratic penalty weight. | `1000.0` |

Returns:

| Type | Description |
|---|---|
| callable | Penalized objective suitable for |
Examples:
::
bounds = constraints.CoilConstraints(max_total_mass_kg=50.0, max_B_conductor_T=15.0)
penalized = make_constrained_objective(throat_ratio_objective, bounds)
result = optimize_coils_mlx(init_params, penalized)
Source code in src/helicon/optimize/constraints.py
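The penalty transform itself is independent of MLX. A plain-Python sketch of the same quadratic-penalty wrapper, using a hypothetical toy objective and constraint (not the library's coil objective):

```python
def make_penalized(objective_fn, constraint_fns, penalty_factor=1000.0):
    """Add penalty_factor * max(g_i(x), 0)^2 for each violated constraint g_i <= 0."""
    def penalized(x):
        f = objective_fn(x)
        for g in constraint_fns:
            f += penalty_factor * max(g(x), 0.0) ** 2
        return f
    return penalized

# Toy example: minimize x^2 subject to x >= 1, written as g(x) = 1 - x <= 0.
f = make_penalized(lambda x: x ** 2, [lambda x: 1.0 - x])
print(f(2.0))  # 4.0: feasible point, no penalty added
print(f(0.0))  # 1000.0: objective 0 plus 1000 * max(1, 0)^2
```

The `max(g, 0)**2` form is zero and smooth at the constraint boundary, which is what keeps the wrapped objective differentiable for `mlx.core.grad`.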
GP Surrogate & Bayesian Optimization
helicon.optimize.surrogate.GPSurrogate(normalize_y=True, n_restarts=5)
Gaussian Process surrogate model backed by scikit-learn.
Uses a Matérn 5/2 kernel with automatic hyperparameter optimization. Provides Expected Improvement acquisition function for Bayesian optimization of noisy, expensive-to-evaluate objectives.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `normalize_y` | bool | Normalize target values before fitting (recommended). | `True` |
| `n_restarts` | int | Number of hyperparameter optimization restarts. | `5` |
Source code in src/helicon/optimize/surrogate.py
Functions
expected_improvement(X, y_best, xi=0.01)
Expected improvement acquisition function (for maximization).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `X` | array, shape (n_points, n_params) | | *required* |
| `y_best` | float | Best observed objective value so far. | *required* |
| `xi` | float | Exploration-exploitation trade-off (larger → more exploration). | `0.01` |

Returns:

| Name | Type | Description |
|---|---|---|
| `EI` | array, shape (n_points,) | Non-negative expected improvement at each candidate point. |
Source code in src/helicon/optimize/surrogate.py
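Under a maximization convention, the standard closed form for EI is EI = (μ − y_best − ξ) Φ(z) + σ φ(z) with z = (μ − y_best − ξ)/σ, where μ and σ are the GP's predictive mean and standard deviation. A scalar sketch (the library's vectorized version and its handling of σ → 0 may differ):

```python
import math

def expected_improvement(mu, sigma, y_best, xi=0.01):
    """Closed-form EI for maximization; defined as 0 where sigma == 0."""
    if sigma <= 0.0:
        return 0.0
    z = (mu - y_best - xi) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal cdf
    return (mu - y_best - xi) * cdf + sigma * pdf

# EI is non-negative and grows with predictive uncertainty sigma,
# which is what drives exploration of poorly sampled regions.
print(expected_improvement(mu=1.0, sigma=0.5, y_best=0.9))
```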
fit(X, y)
Fit the GP to observations.
Inputs are normalized to zero mean and unit variance so the GP length-scale hyperparameter operates in a consistent numerical range regardless of the physical units of the parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `X` | array, shape (n_samples, n_params) | | *required* |
| `y` | array, shape (n_samples,) | | *required* |
Source code in src/helicon/optimize/surrogate.py
predict(X)
Predict mean and std at new points.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `X` | array, shape (n_points, n_params) | | *required* |

Returns:

| Type | Description |
|---|---|
| SurrogateResult | Result with `mean` and `std` arrays. |
Source code in src/helicon/optimize/surrogate.py
helicon.optimize.surrogate.BayesianOptimizer(bounds, n_init=5, seed=0)
Sequential model-based optimizer using a GP surrogate.
Uses Expected Improvement as the acquisition function and maximizes it over a dense random candidate grid (no extra solver dependencies).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `bounds` | array-like, shape (n_params, 2) | | *required* |
| `n_init` | int | Number of random evaluations before switching to GP-guided search. | `5` |
| `seed` | int | Random seed for initial sampling and acquisition maximization. | `0` |
Source code in src/helicon/optimize/surrogate.py
Attributes
n_evaluated
property
Number of observations recorded so far.
Functions
ask(n=1)
Suggest the next point(s) to evaluate.
Returns random points until n_init observations are available,
then maximizes Expected Improvement over a random candidate grid.
Returns:

| Name | Type | Description |
|---|---|---|
| `X` | array, shape (n, n_params) | |
Source code in src/helicon/optimize/surrogate.py
best()
Return the best observed point and its objective value.
Returns:

| Name | Type | Description |
|---|---|---|
| `x_best` | array, shape (n_params,) | |
| `y_best` | float | |
Source code in src/helicon/optimize/surrogate.py
tell(X, y)
Record new observations.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `X` | array, shape (n, n_params) or (n_params,) | | *required* |
| `y` | array, shape (n,) or scalar | | *required* |
Source code in src/helicon/optimize/surrogate.py
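The ask/tell interface decouples suggesting points from evaluating them, so the expensive objective stays outside the optimizer. The loop pattern, sketched with a hypothetical random-search stand-in rather than the GP-backed class:

```python
import random

class RandomAskTell:
    """Minimal stand-in illustrating the ask/tell protocol on box bounds."""
    def __init__(self, bounds, seed=0):
        self.bounds = bounds
        self.rng = random.Random(seed)
        self.X, self.y = [], []

    def ask(self):
        # Suggest one point uniformly inside the bounds.
        return [self.rng.uniform(lo, hi) for lo, hi in self.bounds]

    def tell(self, x, value):
        # Record an observation (maximization convention).
        self.X.append(x)
        self.y.append(value)

    def best(self):
        i = max(range(len(self.y)), key=self.y.__getitem__)
        return self.X[i], self.y[i]

opt = RandomAskTell(bounds=[(-1.0, 1.0)], seed=0)
for _ in range(20):
    x = opt.ask()          # optimizer proposes
    opt.tell(x, -x[0] ** 2)  # caller evaluates: maximize -x^2, optimum at x = 0
x_best, y_best = opt.best()
print(x_best, y_best)
```

`BayesianOptimizer` follows the same protocol, but after `n_init` observations its `ask` maximizes Expected Improvement over a random candidate grid instead of sampling blindly.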
Gradient-Based Optimization (MLX)
helicon.optimize.gradient.GradientOptimizerConfig(n_steps=200, learning_rate=0.001, beta1=0.9, beta2=0.999, eps_adam=1e-08, tol=1e-06, history_every=10, n_phi=64)
dataclass
Hyper-parameters for :class:GradientOptimizer.
Attributes:

| Name | Type | Description |
|---|---|---|
| `n_steps` | int | Maximum number of gradient steps. Default: 200. |
| `learning_rate` | float | Adam base learning rate. Default: 1e-3. |
| `beta1` | float | Adam first-moment decay. Default: 0.9. |
| `beta2` | float | Adam second-moment decay. Default: 0.999. |
| `eps_adam` | float | Adam numerical stability term. Default: 1e-8. |
| `tol` | float | Convergence tolerance on gradient L2 norm. Optimization stops early when |
| `history_every` | int | Record |
| `n_phi` | int | Azimuthal quadrature points for the MLX Biot-Savart backend. Default: 64. |
helicon.optimize.gradient.GradientResult(coil_params_history, objective_history, final_coil_params, n_steps_run, converged)
dataclass
Result from a :class:GradientOptimizer run.
Attributes:

| Name | Type | Description |
|---|---|---|
| `coil_params_history` | list of np.ndarray | Parameter snapshots recorded every |
| `objective_history` | list of float | Scalar objective value at each gradient step. |
| `final_coil_params` | ndarray | Optimized parameters at the last completed step. |
| `n_steps_run` | int | Actual number of steps executed (may be less than |
| `converged` | bool | True if gradient norm dropped below |
helicon.optimize.gradient.GradientOptimizer(grid, objective_fn, config=None)
Gradient descent optimizer backed by mlx.core.grad.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `grid` | Grid | Axisymmetric computation grid, passed through to the objective function during :meth: | *required* |
| `objective_fn` | callable | | *required* |
| `config` | GradientOptimizerConfig | Optimizer hyper-parameters. Defaults to :class: | `None` |
Notes
The optimizer performs gradient descent (minimisation). If you want
to maximise a metric, negate it inside objective_fn.
Source code in src/helicon/optimize/gradient.py
Functions
run(initial_params)
Execute the optimization loop.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `initial_params` | ndarray | Starting coil parameters. Typically shape | *required* |

Returns:

| Type | Description |
|---|---|
| `GradientResult` | |
Source code in src/helicon/optimize/gradient.py
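The update rule behind the optimizer is standard Adam with the config defaults above. A generic numpy sketch of that loop and its gradient-norm stopping rule (not the MLX implementation):

```python
import numpy as np

def adam_minimize(grad_fn, x0, n_steps=200, lr=1e-3, beta1=0.9,
                  beta2=0.999, eps=1e-8, tol=1e-6):
    """Generic Adam minimization with early stopping on the gradient L2 norm."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)  # first-moment estimate
    v = np.zeros_like(x)  # second-moment estimate
    for t in range(1, n_steps + 1):
        g = grad_fn(x)
        if np.linalg.norm(g) < tol:
            return x, True  # converged
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)  # bias correction
        v_hat = v / (1 - beta2 ** t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x, False

# Minimize f(x) = (x - 3)^2; the gradient is 2(x - 3).
x_opt, converged = adam_minimize(lambda x: 2 * (x - 3.0), np.array([0.0]),
                                 n_steps=5000, lr=0.1)
print(x_opt, converged)
```

Note the minimization convention, matching the class docs: to maximize a metric, negate it inside `objective_fn`.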
helicon.optimize.gradient.optimize_mirror_ratio(base_coil_params, grid, *, target_mirror_ratio=5.0, n_steps=200, learning_rate=0.001, backend='auto')
Optimize coil currents to maximize mirror ratio via gradient descent.
Builds an objective_fn that calls
:func:~helicon.fields.biot_savart.compute_bfield_mlx_differentiable
and returns the negative mirror ratio (so minimising = maximising
R_B = B_max / B_exit).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `base_coil_params` | ndarray, shape (N_coils, 3) | Initial coil parameters | *required* |
| `grid` | Grid | Axisymmetric computation grid used to evaluate the field. | *required* |
| `target_mirror_ratio` | float | Currently unused, reserved for future penalised objectives. The optimization maximises R_B unconditionally. | `5.0` |
| `n_steps` | int | Maximum gradient steps. Default: 200. | `200` |
| `learning_rate` | float | Adam learning rate. Default: 1e-3. | `0.001` |
| `backend` | str | Backend selector | `'auto'` |

Returns:

| Type | Description |
|---|---|
| `GradientResult` | |

Raises:

| Type | Description |
|---|---|
| `ImportError` | When MLX is not available or an incompatible backend is requested. |
Source code in src/helicon/optimize/gradient.py
Pareto Front
helicon.optimize.pareto.ParetoResult(front_mask, front_indices, costs)
dataclass
Pareto front from a multi-objective evaluation.
Attributes:

| Name | Type | Description |
|---|---|---|
| `front_mask` | np.ndarray of bool, shape (n_points,) | True for Pareto-optimal (non-dominated) points. |
| `front_indices` | np.ndarray of int | Indices of Pareto-optimal points in the original array. |
| `costs` | ndarray, shape (n_points, n_objectives) | Original cost matrix (minimization convention). |
Attributes
front_costs
property
Cost values for Pareto-optimal points only.
Functions
plot(*, labels=None, ax=None, figsize=(6, 5), dominated_color='lightgray', front_color='steelblue')
Plot the 2-objective Pareto front (minimization convention).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `labels` | tuple of str | Axis labels | `None` |
| `ax` | matplotlib Axes | Axes to draw on. Creates new figure if None. | `None` |
| `figsize` | tuple | Figure size when creating a new figure. | `(6, 5)` |
| `dominated_color` | str | Color for dominated (non-Pareto) points. | `'lightgray'` |
| `front_color` | str | Color for Pareto-optimal points. | `'steelblue'` |

Returns:

| Type | Description |
|---|---|
| (fig, ax) | |
Source code in src/helicon/optimize/pareto.py
helicon.optimize.pareto.pareto_front(costs)
Compute the Pareto front from a set of objective vectors.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `costs` | array, shape (n_points, n_objectives) | Objective values in minimization convention. To maximize an objective, pass its negation. | *required* |

Returns:

| Type | Description |
|---|---|
| `ParetoResult` | Contains |
Examples:
Maximize thrust (F) and detachment efficiency (η_d) simultaneously::
costs = np.column_stack([-thrust_values, -eta_d_values])
result = pareto_front(costs)
best_configs = scan_points[result.front_indices]
Source code in src/helicon/optimize/pareto.py
helicon.optimize.pareto.is_dominated(costs)
Return a boolean mask: True if a point is dominated by any other.
A point i is dominated by point j if j is at least as good in all objectives and strictly better in at least one.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `costs` | array, shape (n_points, n_objectives) | Objective values. Minimization convention. | *required* |

Returns:

| Name | Type | Description |
|---|---|---|
| `dominated` | bool array, shape (n_points,) | |
Source code in src/helicon/optimize/pareto.py
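The dominance test above can be written as a brute-force O(n²) check in numpy; the library implementation may be vectorized differently, but the logic is the same:

```python
import numpy as np

def dominated_mask(costs):
    """True where a point is dominated: some other point is <= in all
    objectives and < in at least one (minimization convention)."""
    costs = np.asarray(costs, dtype=float)
    n = len(costs)
    mask = np.zeros(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if j != i and np.all(costs[j] <= costs[i]) and np.any(costs[j] < costs[i]):
                mask[i] = True
                break
    return mask

# (3, 3.5) is dominated by (2, 3); the other three points form the front.
costs = np.array([[1.0, 4.0], [2.0, 3.0], [3.0, 3.5], [4.0, 1.0]])
print(dominated_mask(costs))  # [False False  True False]
```

Negating `~dominated_mask(costs)` gives exactly the `front_mask` attribute of `ParetoResult`.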
Parameter Scans
helicon.optimize.scan.ParameterRange(path, low, high, n)
dataclass
One parameter axis in a scan.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `path` | str | Dot-notation path into the config dict, e.g. | *required* |
| `low` | float | Inclusive lower range boundary. | *required* |
| `high` | float | Inclusive upper range boundary. | *required* |
| `n` | int | Number of points along this axis. | *required* |
Functions
from_string(s)
classmethod
Parse "path:low:high:n" format used by the CLI.
Source code in src/helicon/optimize/scan.py
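The CLI format is four colon-separated fields. A standalone parsing sketch in the same spirit (not the library method; the example path is hypothetical):

```python
def parse_range(s: str):
    """Parse 'path:low:high:n' into (path, low, high, n)."""
    # rsplit from the right keeps any dots or colons inside the path intact.
    path, low, high, n = s.rsplit(":", 3)
    return path, float(low), float(high), int(n)

print(parse_range("coils.0.current_A:100:5000:25"))
# ('coils.0.current_A', 100.0, 5000.0, 25)
```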
helicon.optimize.scan.ScanResult(points, metrics, param_names, base_config, n_screened=0, objectives=None)
dataclass
Results from a completed parameter scan.
Functions
plot_pareto(*, x_key='thrust_N', y_key=None, ax=None)
Plot the Pareto front for this scan result.
Extracts metric values for x_key and y_key from
:attr:metrics, builds a cost matrix (negated for maximization),
computes the Pareto front and calls :meth:ParetoResult.plot.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x_key` | str | Metric key to use for the x-axis (default | `'thrust_N'` |
| `y_key` | str | Metric key to use for the y-axis. Defaults to the second objective in :attr: | `None` |
| `ax` | matplotlib Axes | Axes to draw on. A new figure is created if None. | `None` |

Returns:

| Type | Description |
|---|---|
| (fig, ax) | Matplotlib figure and axes objects. |

Raises:

| Type | Description |
|---|---|
| `ImportError` | If matplotlib is not installed (skipped gracefully — returns |
Source code in src/helicon/optimize/scan.py
helicon.optimize.scan.generate_scan_points(base_config, ranges, *, method='grid', seed=0, prescreening=False, min_mirror_ratio=1.5)
Generate scan points from parameter ranges.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `base_config` | SimConfig | Base configuration to modify. | *required* |
| `ranges` | list of ParameterRange | Parameter axes to vary. | *required* |
| `method` | `"grid"` \| `"lhc"` | | `'grid'` |
| `seed` | int | Random seed for LHC sampling. | `0` |
| `prescreening` | bool | If True, run Tier 1 analytical pre-screening after generating points. Points with | `False` |
| `min_mirror_ratio` | float | Minimum acceptable mirror ratio when | `1.5` |

Returns:

| Type | Description |
|---|---|
| list of ScanPoint | |
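The two sampling methods fill the parameter box differently: a full factorial grid takes the Cartesian product of evenly spaced axes, while a Latin hypercube places one sample per stratum on every axis. A numpy sketch of both, using hypothetical ranges rather than a SimConfig:

```python
import itertools
import numpy as np

def grid_points(ranges):
    """Full factorial grid: Cartesian product of evenly spaced axes."""
    axes = [np.linspace(lo, hi, n) for lo, hi, n in ranges]
    return np.array(list(itertools.product(*axes)))

def lhc_points(ranges, n_samples, seed=0):
    """Latin hypercube: one jittered sample per stratum per axis, shuffled."""
    rng = np.random.default_rng(seed)
    cols = []
    for lo, hi, _ in ranges:
        strata = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
        rng.shuffle(strata)  # decorrelate the axes
        cols.append(lo + strata * (hi - lo))
    return np.column_stack(cols)

# Hypothetical axes: coil radius [m] and coil current [A].
ranges = [(0.1, 0.3, 5), (500.0, 2000.0, 4)]
print(grid_points(ranges).shape)     # (20, 2): 5 x 4 grid
print(lhc_points(ranges, 20).shape)  # (20, 2): 20 space-filling samples
```

Grid cost grows multiplicatively with each axis, which is why LHC is the usual choice once more than two or three parameters are scanned.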