API Reference
This is the auto-generated reference for LipiDetective’s public API.
Models
- class TransformerNetwork(config: dict[str, Any], output_attentions: bool = False)[source]
- forward(src: Tensor, tgt: Tensor) Tensor | tuple[Tensor, list[Tensor]][source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
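The following is a minimal usage sketch for the documented return contract; the import path, config keys, tensor shapes, and dtypes are assumptions for illustration, not the library's actual requirements.

```python
import torch
from lipidetective.models import TransformerNetwork  # import path assumed

# Hypothetical config keys and placeholder tensors; the real values depend on
# LipiDetective's config.yaml and on how spectra and labels are tokenized.
config = {"d_model": 256, "n_heads": 8}
model = TransformerNetwork(config, output_attentions=True)

src = torch.rand(8, 100, 2)           # placeholder spectrum batch
tgt = torch.randint(0, 30, (8, 20))   # placeholder target token batch

# Per the return annotation, forward yields either a Tensor or a
# (Tensor, list of attention maps) tuple when output_attentions=True.
out = model(src, tgt)
predictions, attentions = out if isinstance(out, tuple) else (out, None)
```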
- class ConvolutionalNetwork(config: dict[str, Any])[source]
- forward(x: Tensor) Tensor[source]
Forward pass of the convolutional network with three convolutional layers and pooling layers, followed by three fully connected linear layers.
- Parameters:
x (torch.Tensor) – input tensor of features with shape (batch_size, 2, n_peaks+1). Dimension 1 is size 2 as the tensor contains the m/z and intensity values of each peak. Dimension 2 is size n_peaks + 1 as the measurement mode (-1 for negative and +1 for positive) and the precursor mass are added.
- Returns:
output of the convolutional network with shape (batch_size, 3). Corresponds to the three masses of the lipid components (headgroup and two side chains) that are supposed to be predicted.
- Return type:
torch.Tensor
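A minimal shape check, sketched from the input and output shapes documented above; the import path and config keys are assumptions.

```python
import torch
from lipidetective.models import ConvolutionalNetwork  # import path assumed

config = {"n_peaks": 50}  # hypothetical config key
model = ConvolutionalNetwork(config)

batch_size, n_peaks = 4, 50
# Row 0 holds m/z values and row 1 intensities; the extra column carries the
# measurement mode (+1/-1) and the precursor mass, as described above.
x = torch.rand(batch_size, 2, n_peaks + 1)

out = model(x)
print(out.shape)  # expected torch.Size([4, 3]): headgroup and two side-chain masses
```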
- class FeedForwardNetwork(config: dict[str, Any])[source]
- forward(x: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class RandomForest(config: dict[str, Any])[source]
- use_single_classifier(train_features: list[Any], train_labels: list[Any], test_features: list[Any]) tuple[Any, RandomForestClassifier][source]
- use_triple_classifier(train_features: list[Any], train_labels: list[Any], test_features: list[Any]) tuple[list[list[Any]], RandomForestClassifier, RandomForestClassifier, RandomForestClassifier][source]
- use_triple_regressor(train_features: list[Any], train_labels: list[Any], test_features: list[Any]) tuple[list[list[Any]], RandomForestRegressor, RandomForestRegressor, RandomForestRegressor][source]
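A hedged usage sketch of the triple-classifier workflow; only the signature and return arity come from the reference above, while the feature and label layout is a placeholder.

```python
from lipidetective.models import RandomForest  # import path assumed

rf = RandomForest(config={"n_estimators": 100})  # hypothetical config key

# Placeholder data: each sample is a feature vector, each label a three-part
# target (headgroup and two side chains); the real encoding may differ.
train_features = [[760.6, 1.0, 184.1], [744.6, -1.0, 168.0]]
train_labels = [["PC", "16:0", "18:1"], ["PE", "16:0", "18:1"]]
test_features = [[762.6, 1.0, 184.1]]

# One prediction list per lipid component plus the three fitted classifiers.
predictions, clf_head, clf_sn1, clf_sn2 = rf.use_triple_classifier(
    train_features, train_labels, test_features
)
```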
Workflow
- class Trainer(config: dict[str, Any])[source]
The trainer class creates the lightning module and executes the specified workflow. It handles processing and splitting of the dataset.
- test() None[source]
This loop analyzes the model's performance on a previously unseen labeled test dataset.
- tune_model(config: dict[str, Any], num_epochs: int, train_loader: DataLoader[Any], val_loader: DataLoader[Any], trainset_lipids: list[str], valset_lipids: list[str]) None[source]
- perform_data_split() DataSplit[source]
This method extracts the names of all datasets in the HDF5 input file and saves them in separate lists for the training and validation sets. These lists can then be used to iterate over the dataset using lazy loading if the whole dataset is too big to be loaded at once.
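The sketch below illustrates the lazy-loading pattern this split enables rather than the method's internals: dataset names are collected up front and individual spectra are only read when indexed. The group name "all_datasets" is taken from the H5Dataset attributes documented below; the file path and split ratio are placeholders.

```python
import random
import h5py

# Collect dataset names once; the spectra themselves stay on disk.
with h5py.File("spectra.h5", "r") as f:          # placeholder path
    names = list(f["all_datasets"].keys())

random.shuffle(names)
cut = int(0.8 * len(names))                       # illustrative 80/20 split
train_names, val_names = names[:cut], names[cut:]

# Later, a single sample is loaded lazily by name when the dataset is indexed.
with h5py.File("spectra.h5", "r") as f:
    first_spectrum = f["all_datasets"][train_names[0]][()]
```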
- class H5Dataset(config: dict[str, Any], dataset_names: list[str], lipid_librarian: LipidLibrary, file_path: str)[source]
This class implements a custom PyTorch dataset. It handles reading in the training data from HDF5 files. It overrides the __getitem__ method to return the processed spectrum and its metadata for a sample at a given index.
- dataset_names
list of dataset names in the HDF5 file group “all_datasets”, used to access samples by index
- Type:
list[str]
- hdf5_file
the opened HDF5 file; set to None during initialization and assigned once the first sample is requested
- Type:
h5py.Dataset
- decimal_accuracy
Integer indicating the decimal accuracy to which the mass spectra should be binned
- Type:
int
- lipid_librarian
LipidLibrary instance used for generating the label for a sample
- Type:
LipidLibrary
- get_n_highest_peaks(spectrum: ndarray, n_peaks: int) ndarray[source]
Processes the spectrum for a sample in the H5Dataset to prepare it as input for the model.
- Parameters:
spectrum (np.ndarray) – the spectrum extracted from an HDF5 dataset containing m/z and intensity arrays
n_peaks (int) – maximum number of peaks to be fed into neural network
- Returns:
the spectrum containing the number of peaks specified in n_peaks and sorted by descending intensity
- Return type:
np.ndarray
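A minimal standalone sketch of the behaviour described above, keeping the n_peaks most intense peaks sorted by descending intensity; it assumes the spectrum is a (2, n) array with m/z in row 0 and intensity in row 1, and is not the library's implementation.

```python
import numpy as np

def top_n_peaks(spectrum: np.ndarray, n_peaks: int) -> np.ndarray:
    """Keep the n_peaks most intense peaks, sorted by descending intensity."""
    order = np.argsort(spectrum[1])[::-1][:n_peaks]  # row 1 holds intensities
    return spectrum[:, order]

spectrum = np.array([[400.3, 523.5, 760.6],    # m/z values
                     [10.0, 80.0, 35.0]])      # intensities
print(top_n_peaks(spectrum, 2))  # keeps the peaks at m/z 523.5 and 760.6
```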
- bin_spectrum(spectrum: ndarray) ndarray[source]
Truncates the m/z values at the decimal position defined in config.yaml and sums the intensities of peaks that fall into the same bin.
- Parameters:
spectrum (np.ndarray) – the m/z and intensity values of a dataset from the HDF5 file
- Returns:
the binned spectrum ordered by intensity
- Return type:
np.ndarray
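A hedged illustration of the binning described above: m/z values are truncated at a fixed decimal position, intensities falling into the same bin are summed, and the result is ordered by intensity. The (2, n) layout and the decimal_accuracy parameter are carried over from the attributes above; this is a sketch, not the library's code.

```python
import numpy as np

def bin_spectrum_sketch(spectrum: np.ndarray, decimal_accuracy: int) -> np.ndarray:
    """Truncate m/z values and sum intensities that land in the same bin."""
    factor = 10 ** decimal_accuracy
    mz_binned = np.trunc(spectrum[0] * factor) / factor
    bins, inverse = np.unique(mz_binned, return_inverse=True)
    intensities = np.zeros_like(bins)
    np.add.at(intensities, inverse, spectrum[1])
    order = np.argsort(intensities)[::-1]        # order bins by descending intensity
    return np.vstack([bins[order], intensities[order]])

spectrum = np.array([[760.58, 760.61, 523.50],   # m/z values
                     [40.0, 10.0, 25.0]])        # intensities
print(bin_spectrum_sketch(spectrum, 0))  # the two peaks near m/z 760 merge into one bin
```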
- class LightningModule(config: dict[str, Any], evaluator: Evaluator, trainset_lipids: list[str] | None = None, valset_lipids: list[str] | None = None, testset_lipids: list[str] | None = None)[source]
- configure_optimizers() Any[source]
Choose what optimizers and learning-rate schedulers to use in your optimization. Normally you’d need one. But in the case of GANs or similar you might have multiple. Optimization with multiple optimizers only works in the manual optimization mode.
- Returns:
Any of these 6 options.
- Single optimizer.
- List or Tuple of optimizers.
- Two lists - The first list has multiple optimizers, and the second has multiple LR schedulers (or multiple lr_scheduler_config).
- Dictionary, with an "optimizer" key, and (optionally) a "lr_scheduler" key whose value is a single LR scheduler or lr_scheduler_config.
- None - Fit will run without any optimizer.

The lr_scheduler_config is a dictionary which contains the scheduler and its associated configuration. The default configuration is shown below.

```python
lr_scheduler_config = {
    # REQUIRED: The scheduler instance
    "scheduler": lr_scheduler,
    # The unit of the scheduler's step size, could also be 'step'.
    # 'epoch' updates the scheduler on epoch end whereas 'step'
    # updates it after an optimizer update.
    "interval": "epoch",
    # How many epochs/steps should pass between calls to
    # `scheduler.step()`. 1 corresponds to updating the learning
    # rate after every epoch/step.
    "frequency": 1,
    # Metric to monitor for schedulers like `ReduceLROnPlateau`
    "monitor": "val_loss",
    # If set to `True`, will enforce that the value specified 'monitor'
    # is available when the scheduler is updated, thus stopping
    # training if not found. If set to `False`, it will only produce a warning
    "strict": True,
    # If using the `LearningRateMonitor` callback to monitor the
    # learning rate progress, this keyword can be used to specify
    # a custom logged name
    "name": None,
}
```
When there are schedulers in which the .step() method is conditioned on a value, such as the torch.optim.lr_scheduler.ReduceLROnPlateau scheduler, Lightning requires that the lr_scheduler_config contains the keyword "monitor" set to the metric name that the scheduler should be conditioned on.

```python
# The ReduceLROnPlateau scheduler requires a monitor
def configure_optimizers(self):
    optimizer = Adam(...)
    return {
        "optimizer": optimizer,
        "lr_scheduler": {
            "scheduler": ReduceLROnPlateau(optimizer, ...),
            "monitor": "metric_to_track",
            "frequency": "indicates how often the metric is updated",
            # If "monitor" references validation metrics, then "frequency" should be set to a
            # multiple of "trainer.check_val_every_n_epoch".
        },
    }


# In the case of two optimizers, only one using the ReduceLROnPlateau scheduler
def configure_optimizers(self):
    optimizer1 = Adam(...)
    optimizer2 = SGD(...)
    scheduler1 = ReduceLROnPlateau(optimizer1, ...)
    scheduler2 = LambdaLR(optimizer2, ...)
    return (
        {
            "optimizer": optimizer1,
            "lr_scheduler": {
                "scheduler": scheduler1,
                "monitor": "metric_to_track",
            },
        },
        {"optimizer": optimizer2, "lr_scheduler": scheduler2},
    )
```

Metrics can be made available to monitor by simply logging it using self.log('metric_to_track', metric_val) in your LightningModule.
Note
Some things to know:
- Lightning calls .backward() and .step() automatically in case of automatic optimization.
- If a learning rate scheduler is specified in configure_optimizers() with key "interval" (default "epoch") in the scheduler configuration, Lightning will call the scheduler's .step() method automatically in case of automatic optimization.
- If you use 16-bit precision (precision=16), Lightning will automatically handle the optimizer.
- If you use torch.optim.LBFGS, Lightning handles the closure function automatically for you.
- If you use multiple optimizers, you will have to switch to 'manual optimization' mode and step them yourself.
- If you need to control how often the optimizer steps, override the optimizer_step() hook.
- training_step(batch: dict[str, Any], batch_idx: int) Tensor[source]
Here you compute and return the training loss and some additional metrics for e.g. the progress bar or logger.
- Parameters:
batch – The output of your data iterable, normally a DataLoader.
batch_idx – The index of this batch.
dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)
- Returns:
- Tensor - The loss tensor.
- dict - A dictionary which can include any keys, but must include the key 'loss' in the case of automatic optimization.
- None - In automatic optimization, this will skip to the next batch (but is not supported for multi-GPU, TPU, or DeepSpeed). For manual optimization, this has no special meaning, as returning the loss is not required.
In this step you’d normally do the forward pass and calculate the loss for a batch. You can also do fancier things like multiple forward passes or something model specific.
Example:
```python
def training_step(self, batch, batch_idx):
    x, y, z = batch
    out = self.encoder(x)
    loss = self.loss(out, x)
    return loss
```
To use multiple optimizers, you can switch to ‘manual optimization’ and control their stepping:
```python
def __init__(self):
    super().__init__()
    self.automatic_optimization = False


# Multiple optimizers (e.g.: GANs)
def training_step(self, batch, batch_idx):
    opt1, opt2 = self.optimizers()

    # do training_step with encoder
    ...
    opt1.step()

    # do training_step with decoder
    ...
    opt2.step()
```
Note
When accumulate_grad_batches > 1, the loss returned here will be automatically normalized by accumulate_grad_batches internally.
- validation_step(batch: dict[str, Any], batch_idx: int) Tensor[source]
Operates on a single batch of data from the validation set. In this step you might generate examples or calculate anything of interest like accuracy.
- Parameters:
batch – The output of your data iterable, normally a DataLoader.
batch_idx – The index of this batch.
dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)
- Returns:
- Tensor - The loss tensor.
- dict - A dictionary. Can include any keys, but must include the key 'loss'.
- None - Skip to the next batch.
```python
# if you have one val dataloader:
def validation_step(self, batch, batch_idx): ...


# if you have multiple val dataloaders:
def validation_step(self, batch, batch_idx, dataloader_idx=0): ...
```
Examples:
```python
# CASE 1: A single validation dataset
def validation_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'val_loss': loss, 'val_acc': val_acc})
```
If you pass in multiple val dataloaders, validation_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

```python
# CASE 2: multiple validation dataloaders
def validation_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    x, y = batch

    # implement your own
    out = self(x)
    if dataloader_idx == 0:
        loss = self.loss0(out, y)
    else:
        loss = self.loss1(out, y)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs separately for each dataloader
    self.log_dict({f"val_loss_{dataloader_idx}": loss, f"val_acc_{dataloader_idx}": acc})
```
Note
If you don’t need to validate you don’t need to implement this method.
Note
When the validation_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of validation, the model goes back to training mode and gradients are enabled.
- test_step(batch: dict[str, Any], batch_idx: int) Tensor[source]
Operates on a single batch of data from the test set. In this step you’d normally generate examples or calculate anything of interest such as accuracy.
- Parameters:
batch – The output of your data iterable, normally a DataLoader.
batch_idx – The index of this batch.
dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)
- Returns:
- Tensor - The loss tensor.
- dict - A dictionary. Can include any keys, but must include the key 'loss'.
- None - Skip to the next batch.
```python
# if you have one test dataloader:
def test_step(self, batch, batch_idx): ...


# if you have multiple test dataloaders:
def test_step(self, batch, batch_idx, dataloader_idx=0): ...
```
Examples:
```python
# CASE 1: A single test dataset
def test_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    test_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'test_loss': loss, 'test_acc': test_acc})
```
If you pass in multiple test dataloaders, test_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

```python
# CASE 2: multiple test dataloaders
def test_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    x, y = batch

    # implement your own
    out = self(x)
    if dataloader_idx == 0:
        loss = self.loss0(out, y)
    else:
        loss = self.loss1(out, y)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs separately for each dataloader
    self.log_dict({f"test_loss_{dataloader_idx}": loss, f"test_acc_{dataloader_idx}": acc})
```
Note
If you don’t need to test you don’t need to implement this method.
Note
When the test_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of the test epoch, the model goes back to training mode and gradients are enabled.
- predict_step(batch: dict[str, Any], batch_idx: int) None[source]
Step function called during predict(). By default, it calls forward(). Override to add any processing logic.
The predict_step() is used to scale inference on multi-devices.
To prevent an OOM error, it is possible to use the BasePredictionWriter callback to write the predictions to disk or database after each batch or on epoch end.
The BasePredictionWriter should be used while using a spawn based accelerator. This happens for Trainer(strategy="ddp_spawn") or training on 8 TPU cores with Trainer(accelerator="tpu", devices=8) as predictions won't be returned.
- Parameters:
batch – The output of your data iterable, normally a DataLoader.
batch_idx – The index of this batch.
dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)
- Returns:
Predicted output (optional).
Example
```python
class MyModel(LightningModule):
    def predict_step(self, batch, batch_idx, dataloader_idx=0):
        return self(batch)


dm = ...
model = MyModel()
trainer = Trainer(accelerator="gpu", devices=2)
predictions = trainer.predict(model, dm)
```
- get_preds_vs_labels(batch_idx: int, output: Tensor, labels: Tensor, dataset_path: Tensor) Tensor[source]
Helpers
- class LipidLibrary[source]
- resolve_config_paths(config: dict[str, Any]) dict[str, Any][source]
Resolve all file paths in a configuration dictionary.
Paths can be:
- Absolute paths: used as-is
- Relative paths: resolved based on path type (data, model, output)
- Parameters:
config – Configuration dictionary from YAML.
- Returns:
Config with resolved absolute paths.
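A short usage example; the config file name and keys are placeholders, and only the relative-versus-absolute behaviour comes from the description above.

```python
import yaml
from lipidetective.helpers import resolve_config_paths  # import path assumed

with open("config.yaml") as f:       # placeholder config file
    config = yaml.safe_load(f)

# Relative entries (e.g. an HDF5 dataset or a checkpoint) are resolved against
# the data/model/output directories; absolute paths are used as-is.
config = resolve_config_paths(config)
```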
- get_project_root() Path[source]
Get the project root directory.
Searches upward from this file’s location for a directory containing pyproject.toml. Raises FileNotFoundError if not found (e.g. in an installed wheel layout), prompting the user to set environment variables instead.
- Returns:
Path to the project root directory.
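A brief sketch of the failure mode mentioned above: in an installed layout there is no pyproject.toml to find, so the call raises and the environment-variable overrides documented below can be used instead. The import path is an assumption.

```python
from lipidetective.helpers import get_project_root  # import path assumed

try:
    root = get_project_root()
except FileNotFoundError:
    # Installed as a wheel: rely on the LIPIDETECTIVE_* directory overrides instead.
    root = None
```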
- resolve_data_path(relative_path: str | Path) Path[source]
Resolve a data file path.
- Parameters:
relative_path – Path relative to data directory.
- Returns:
Absolute path to data file.
- Environment variables:
LIPIDETECTIVE_DATA_DIR: Override default data directory.
- resolve_model_path(relative_path: str | Path) Path[source]
Resolve a model file path.
- Parameters:
relative_path – Path relative to models directory.
- Returns:
Absolute path to model file.
- Environment variables:
LIPIDETECTIVE_MODELS_DIR: Override default models directory.
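A combined usage sketch for both resolvers; the overrides are the environment variables documented above, while the file names and import path are placeholders.

```python
import os
from lipidetective.helpers import resolve_data_path, resolve_model_path  # import path assumed

# Optional overrides, useful when running from an installed package.
os.environ["LIPIDETECTIVE_DATA_DIR"] = "/scratch/lipidetective/data"
os.environ["LIPIDETECTIVE_MODELS_DIR"] = "/scratch/lipidetective/models"

spectra_file = resolve_data_path("spectra/training_set.h5")     # placeholder file
checkpoint = resolve_model_path("transformer/best_model.ckpt")  # placeholder file
```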