CHAP.saxswaxs package
Submodules
CHAP.saxswaxs.processor module
Processors used only by SAXSWAXS experiments.
- class CfProcessor(*, root: Annotated[Path, PathType(path_type=dir)] | None = '/home/runner/work/ChessAnalysisPipeline/ChessAnalysisPipeline/docs', inputdir: Annotated[Path, PathType(path_type=dir)] | None = None, outputdir: Annotated[Path, PathType(path_type=dir)] | None = None, interactive: bool | None = False, log_level: Literal['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'] | None = 'INFO', logger: Logger | None = None, name: Annotated[str, StringConstraints(strip_whitespace=True, to_upper=None, to_lower=None, strict=None, min_length=1, max_length=None, pattern=None)] | None = None, schema: Annotated[str, StringConstraints(strip_whitespace=True, to_upper=None, to_lower=None, strict=None, min_length=1, max_length=None, pattern=None)] | None = None)
Bases:
Processor
Processor to calculate the correction factor Cf that, when multiplied by appropriately processed SAXSWAXS data, converts the data to absolute cross-section / intensity in inverse cm.
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(context: Any, /) None
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that’s what pydantic-core passes when calling it.
- Args:
self: The BaseModel instance. context: The context.
- process(data, interactive=False, save_figures=True, nxpath=None, radial_range=None, scan_step_indices=None, eps=1e-05)
Return a dictionary with the computed correction factor Cf and the configuration parameters.
- Parameters:
data (list[PipelineData]) – Input data list containing the reference data labelled with ‘reference_data’ as well as the NeXus input data with the azimuthally integrated SAXSWAXS data.
interactive (bool, optional) – Allows for user interactions, defaults to False.
save_figures (bool, optional) – Create Matplotlib correction factor image that can be saved to file downstream in the workflow, defaults to True.
nxpath (str, optional) – The path to a specific NeXus NXdata object in the input NeXus file tree from which to read the measured data.
radial_range (Union[list[float], tuple[float, float]], optional) – q-range used to compute Cf.
scan_step_indices (optional) – Scan step indices to use for the calculation. If not specified, the correction factor will be computed on the average of all data for the scan.
eps (float, optional) – Minimum plotting value of the corrected azimuthally integrated SAXSWAXS data, defaults to 1.e-5.
- Returns:
Computed correction factor Cf and the configuration parameters plus the optional correction factor image as a CHAP.pipeline.PipelineData object.
- Return type:
Union[dict, (dict, PipelineData)]
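Once computed, applying Cf is a plain multiplication; a minimal NumPy sketch with made-up numbers (not output of the processor):

```python
import numpy as np

# Hypothetical values: a correction factor Cf as returned by CfProcessor,
# and an azimuthally integrated intensity profile in arbitrary units.
cf = 2.5
intensity_arb = np.array([10.0, 4.0, 1.0])

# Multiplying by Cf converts the processed data to absolute
# cross-section / intensity in inverse cm.
intensity_abs = cf * intensity_arb
```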
- class FluxAbsorptionBackgroundCorrectionProcessor(*, root: Annotated[Path, PathType(path_type=dir)] | None = '/home/runner/work/ChessAnalysisPipeline/ChessAnalysisPipeline/docs', inputdir: Annotated[Path, PathType(path_type=dir)] | None = None, outputdir: Annotated[Path, PathType(path_type=dir)] | None = None, interactive: bool | None = False, log_level: Literal['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'] | None = 'INFO', logger: Logger | None = None, name: Annotated[str, StringConstraints(strip_whitespace=True, to_upper=None, to_lower=None, strict=None, min_length=1, max_length=None, pattern=None)] | None = None, schema: Annotated[str, StringConstraints(strip_whitespace=True, to_upper=None, to_lower=None, strict=None, min_length=1, max_length=None, pattern=None)] | None = None)
Bases:
ExpressionProcessor
Processor for flux, absorption, and background correction. May also perform thickness correction.
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(context: Any, /) None
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that’s what pydantic-core passes when calling it.
- Args:
self: The BaseModel instance. context: The context.
- process(data, presample_intensity_reference_rate=None, sample_thickness_cm=None, sample_mu_inv_cm=None, nxprocess=False)
Given input data for ‘intensity’, ‘presample_intensity’, ‘postsample_intensity’, ‘background_presample_intensity’, ‘background_postsample_intensity’, ‘background_intensity’, and ‘dwell_time_actual’, return flux and absorption corrected intensity signal.
- Parameters:
data (list[PipelineData]) – Input data list containing all necessary data labelled with their proper names.
presample_intensity_reference_rate (float, optional) – Reference counting rate for the ‘presample_intensity’ signal. If not specified, it will be calculated with ‘np.nanmean(presample_intensity / dwell_time_actual)’. Defaults to None.
sample_thickness_cm (float, optional) – Sample thickness in centimeters. If specified, this processor will additionally perform thickness correction. Use of this parameter is mutually exclusive with use of sample_mu_inv_cm. Defaults to None.
sample_mu_inv_cm (float, optional) – Sample linear attenuation coefficient in inverse centimeters. If specified, this processor will additionally perform thickness correction. Use of this parameter is mutually exclusive with use of sample_thickness_cm. Defaults to None.
nxprocess – Flag to indicate the flux corrected data should be returned as an NXprocess. Defaults to False.
- Returns:
Flux and absorption corrected version of input ‘intensity’ data.
- Return type:
object
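The two mutually exclusive thickness parameters are linked through the measured transmission if one assumes Beer-Lambert attenuation; the sketch below shows that relation (whether the processor uses exactly this expression is an assumption, not stated in this documentation):

```python
import numpy as np

# Hypothetical transmission, measured as postsample / presample intensity.
transmission = 0.8

# Beer-Lambert: T = exp(-mu * t). Given mu (in 1/cm) the thickness
# t (in cm) follows from the transmission, and vice versa.
sample_mu_inv_cm = 10.0
sample_thickness_cm = -np.log(transmission) / sample_mu_inv_cm
```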
- class FluxAbsorptionCorrectionProcessor(*, root: Annotated[Path, PathType(path_type=dir)] | None = '/home/runner/work/ChessAnalysisPipeline/ChessAnalysisPipeline/docs', inputdir: Annotated[Path, PathType(path_type=dir)] | None = None, outputdir: Annotated[Path, PathType(path_type=dir)] | None = None, interactive: bool | None = False, log_level: Literal['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'] | None = 'INFO', logger: Logger | None = None, name: Annotated[str, StringConstraints(strip_whitespace=True, to_upper=None, to_lower=None, strict=None, min_length=1, max_length=None, pattern=None)] | None = None, schema: Annotated[str, StringConstraints(strip_whitespace=True, to_upper=None, to_lower=None, strict=None, min_length=1, max_length=None, pattern=None)] | None = None)
Bases:
ExpressionProcessor
Processor for flux and absorption correction.
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(context: Any, /) None
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that’s what pydantic-core passes when calling it.
- Args:
self: The BaseModel instance. context: The context.
- process(data, presample_intensity_reference_rate=None, nxprocess=False)
Given input data for ‘intensity’, ‘presample_intensity’, ‘postsample_intensity’, ‘background_presample_intensity’, ‘background_postsample_intensity’, and ‘dwell_time_actual’, return flux and absorption corrected intensity signal.
- Parameters:
data (list[PipelineData]) – Input data list containing all necessary data labelled with their proper names.
presample_intensity_reference_rate (float, optional) – Reference counting rate for the ‘presample_intensity’ signal. If not specified, it will be calculated with ‘np.nanmean(presample_intensity / dwell_time_actual)’. Defaults to None.
nxprocess – Flag to indicate the flux corrected data should be returned as an NXprocess. Defaults to False.
- Returns:
Flux and absorption corrected version of input ‘intensity’ data.
- Return type:
object
- class FluxCorrectionProcessor(*, root: Annotated[Path, PathType(path_type=dir)] | None = '/home/runner/work/ChessAnalysisPipeline/ChessAnalysisPipeline/docs', inputdir: Annotated[Path, PathType(path_type=dir)] | None = None, outputdir: Annotated[Path, PathType(path_type=dir)] | None = None, interactive: bool | None = False, log_level: Literal['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'] | None = 'INFO', logger: Logger | None = None, name: Annotated[str, StringConstraints(strip_whitespace=True, to_upper=None, to_lower=None, strict=None, min_length=1, max_length=None, pattern=None)] | None = None, schema: Annotated[str, StringConstraints(strip_whitespace=True, to_upper=None, to_lower=None, strict=None, min_length=1, max_length=None, pattern=None)] | None = None)
Bases:
ExpressionProcessor
Processor for flux correction.
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(context: Any, /) None
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that’s what pydantic-core passes when calling it.
- Args:
self: The BaseModel instance. context: The context.
- process(data, presample_intensity_reference_rate=None, nxprocess=False)
Given input data for ‘intensity’, ‘presample_intensity’, and ‘dwell_time_actual’, return flux corrected intensity signal.
- Parameters:
data (list[PipelineData]) – Input data list containing items with names ‘intensity’, ‘presample_intensity’, and (if presample_intensity_reference_rate is not specified) ‘dwell_time_actual’.
presample_intensity_reference_rate (float, optional) – Reference counting rate for the ‘presample_intensity’ signal. If not specified, it will be calculated with ‘np.nanmean(presample_intensity / dwell_time_actual)’. Defaults to None.
nxprocess – Flag to indicate the flux corrected data should be returned as an NXprocess. Defaults to False.
- Returns:
Flux corrected version of input ‘intensity’ data.
- Return type:
object
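A sketch of the kind of normalization described above, using the default reference rate; the exact expression evaluated by the ExpressionProcessor base class is not reproduced in this documentation, so treat the final line as illustrative:

```python
import numpy as np

# Hypothetical per-point data: detector intensity, presample monitor
# counts, and actual dwell time per point.
intensity = np.array([100.0, 210.0, 95.0])
presample_intensity = np.array([1000.0, 2100.0, 950.0])
dwell_time_actual = np.array([1.0, 2.0, 1.0])

# Default reference rate, as described for
# presample_intensity_reference_rate above.
reference_rate = np.nanmean(presample_intensity / dwell_time_actual)

# One plausible normalization: scale each point so that its monitor
# counting rate matches the reference rate.
flux_corrected = intensity * reference_rate / (presample_intensity / dwell_time_actual)
```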
- class PyfaiIntegrationProcessor(*, root: Annotated[Path, PathType(path_type=dir)] | None = '/home/runner/work/ChessAnalysisPipeline/ChessAnalysisPipeline/docs', inputdir: Annotated[Path, PathType(path_type=dir)] | None = None, outputdir: Annotated[Path, PathType(path_type=dir)] | None = None, interactive: bool | None = False, log_level: Literal['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'] | None = 'INFO', logger: Logger | None = None, name: Annotated[str, StringConstraints(strip_whitespace=True, to_upper=None, to_lower=None, strict=None, min_length=1, max_length=None, pattern=None)] | None = None, schema: Annotated[str, StringConstraints(strip_whitespace=True, to_upper=None, to_lower=None, strict=None, min_length=1, max_length=None, pattern=None)] | None = None, pipeline_fields: dict = {'config': 'common.models.integration.PyfaiIntegrationConfig'}, config: PyfaiIntegrationConfig)
Bases:
Processor
Processor for performing pyFAI integrations.
- Variables:
config – PyfaiIntegrationConfig
- config: PyfaiIntegrationConfig
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(context: Any, /) None
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that’s what pydantic-core passes when calling it.
- Args:
self: The BaseModel instance. context: The context.
- pipeline_fields: dict
- process(data, idx_slices=[{'start': 0, 'step': 1}])
Perform a set of integrations on 2D detector data.
- Parameters:
data (list[PipelineData]) – input 2D detector data
idx_slices (list[dict[str, int]], optional) – List of dictionaries identifying the sliced index at which the output data should be written in a dataset. Defaults to [{‘start’: 0, ‘step’: 1}].
- Returns:
List of dictionaries ready for use with saxswaxs.ZarrResultsWriter or saxswaxs.NexusResultsWriter.
- Return type:
list[dict[str, object]]
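The idx_slices dictionaries map naturally onto Python slice objects; the helper below (to_slices is hypothetical, not part of CHAP) illustrates how such dictionaries could address an output dataset:

```python
# Hypothetical helper: turn idx_slices-style dictionaries into Python
# slice objects for writing results into a larger dataset.
def to_slices(idx_slices, lengths):
    """Build one slice per dimension from {'start', 'step'} dicts."""
    return tuple(
        slice(d.get('start', 0),
              d.get('start', 0) + n * d.get('step', 1),
              d.get('step', 1))
        for d, n in zip(idx_slices, lengths)
    )

# One dimension of length 3, written starting at index 0 with step 1.
s = to_slices([{'start': 0, 'step': 1}], [3])
print(s)  # (slice(0, 3, 1),)
```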
- class SetupProcessor(*, root: Annotated[Path, PathType(path_type=dir)] | None = '/home/runner/work/ChessAnalysisPipeline/ChessAnalysisPipeline/docs', inputdir: Annotated[Path, PathType(path_type=dir)] | None = None, outputdir: Annotated[Path, PathType(path_type=dir)] | None = None, interactive: bool | None = False, log_level: Literal['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'] | None = 'INFO', logger: Logger | None = None, name: Annotated[str, StringConstraints(strip_whitespace=True, to_upper=None, to_lower=None, strict=None, min_length=1, max_length=None, pattern=None)] | None = None, schema: Annotated[str, StringConstraints(strip_whitespace=True, to_upper=None, to_lower=None, strict=None, min_length=1, max_length=None, pattern=None)] | None = None, pipeline_fields: dict = {'map_config': 'common.models.map.MapConfig', 'pyfai_config': 'common.models.integration.PyfaiIntegrationConfig'}, map_config: MapConfig = None, pyfai_config: PyfaiIntegrationConfig, detectors: Annotated[list[Detector], Len(min_length=1, max_length=None)], dataset_shape: Annotated[list[Annotated[int, None, Interval(gt=0, ge=None, lt=None, le=None), None]], Len(min_length=1, max_length=None)] | None = None, dataset_chunks: Literal['auto'] | Annotated[list[Annotated[int, None, Interval(gt=0, ge=None, lt=None, le=None), None]], Len(min_length=1, max_length=None)] | None = 'auto', num_chunk: Annotated[int, None, Interval(gt=0, ge=None, lt=None, le=None), None] | None = 1, raw_data: bool | None = True)
Bases:
Processor
Convenience Processor for setting up a container for SAXS/WAXS experiments.
- Variables:
detectors – List of basic detector configuration parameters.
dataset_shape – Shape of the completed dataset that will be processed later on (shape of the measurement itself, _not_ including the dimensions of any signals collected at each point in that measurement).
dataset_chunks – Extent of chunks along each dimension of the completed dataset / measurement. Choose this according to how you will process your data – for example, if your dataset_shape is [m, n], and you are planning to process each of the m rows as chunks, dataset_chunks should be [1, n]. But if you plan to process each of the n columns as chunks, dataset_chunks should be [m, 1].
num_chunk – Used only if dataset_chunks is “auto”. Preferred number of chunks in the dataset. Defaults to 1.
raw_data – Flag to indicate whether or not space for raw detector data should be included in the returned Zarr structure; defaults to True.
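The chunking rule above can be phrased as: put extent 1 along the axis you iterate over and keep the other axes whole. A small illustrative helper (not part of CHAP):

```python
# Hypothetical helper illustrating the dataset_chunks rule described
# above: chunk along the axis you will process, keep the others whole.
def iteration_chunks(dataset_shape, process_axis):
    """Return dataset_chunks with extent 1 along process_axis."""
    return [1 if i == process_axis else n
            for i, n in enumerate(dataset_shape)]

m, n = 4, 5
print(iteration_chunks([m, n], 0))  # [1, 5] -> process the m rows
print(iteration_chunks([m, n], 1))  # [4, 1] -> process the n columns
```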
- dataset_chunks: Literal['auto'] | Annotated[list[Annotated[int, None, Interval(gt=0, ge=None, lt=None, le=None), None]], Len(min_length=1, max_length=None)] | None
- dataset_shape: Annotated[list[Annotated[int, None, Interval(gt=0, ge=None, lt=None, le=None), None]], Len(min_length=1, max_length=None)] | None
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(context: Any, /) None
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that’s what pydantic-core passes when calling it.
- Args:
self: The BaseModel instance. context: The context.
- num_chunk: Annotated[int, None, Interval(gt=0, ge=None, lt=None, le=None), None] | None
- pipeline_fields: dict
- process(data)
Extract the contents of the input data, add a string to it, and return the amended value.
- Parameters:
data – Input data.
- Returns:
Processed data.
- pyfai_config: PyfaiIntegrationConfig
- raw_data: bool | None
- class SetupResultsProcessor(*, root: Annotated[Path, PathType(path_type=dir)] | None = '/home/runner/work/ChessAnalysisPipeline/ChessAnalysisPipeline/docs', inputdir: Annotated[Path, PathType(path_type=dir)] | None = None, outputdir: Annotated[Path, PathType(path_type=dir)] | None = None, interactive: bool | None = False, log_level: Literal['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'] | None = 'INFO', logger: Logger | None = None, name: Annotated[str, StringConstraints(strip_whitespace=True, to_upper=None, to_lower=None, strict=None, min_length=1, max_length=None, pattern=None)] | None = None, schema: Annotated[str, StringConstraints(strip_whitespace=True, to_upper=None, to_lower=None, strict=None, min_length=1, max_length=None, pattern=None)] | None = None, pipeline_fields: dict = {'config': 'common.models.integration.PyfaiIntegrationConfig'}, config: PyfaiIntegrationConfig, dataset_shape: Annotated[list[Annotated[int, None, Interval(gt=0, ge=None, lt=None, le=None), None]], Len(min_length=1, max_length=None)], dataset_chunks: Literal['auto'] | Annotated[list[Annotated[int, None, Interval(gt=0, ge=None, lt=None, le=None), None]], Len(min_length=1, max_length=None)] | None = 'auto')
Bases:
Processor
Processor for creating an initial zarr structure with empty datasets, to be filled in by saxswaxs.PyfaiIntegrationProcessor and common.ZarrValuesWriter.
- Variables:
dataset_shape – Shape of the completed dataset that will be processed later on (shape of the measurement itself, _not_ including the dimensions of any signals collected at each point in that measurement).
dataset_chunks – Extent of chunks along each dimension of the completed dataset / measurement. Choose this according to how you will process your data – for example, if your dataset_shape is [m, n], and you are planning to process each of the m rows as chunks, dataset_chunks should be [1, n]. But if you plan to process each of the n columns as chunks, dataset_chunks should be [m, 1].
- config: PyfaiIntegrationConfig
- dataset_chunks: Literal['auto'] | Annotated[list[Annotated[int, None, Interval(gt=0, ge=None, lt=None, le=None), None]], Len(min_length=1, max_length=None)] | None
- dataset_shape: Annotated[list[Annotated[int, None, Interval(gt=0, ge=None, lt=None, le=None), None]], Len(min_length=1, max_length=None)]
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(context: Any, /) None
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that’s what pydantic-core passes when calling it.
- Args:
self: The BaseModel instance. context: The context.
- pipeline_fields: dict
- process(data)
Return a zarr.group to hold processed SAXS/WAXS data processed by saxswaxs.PyfaiIntegrationProcessor.
- Parameters:
data (list[PipelineData]) – Input data (configurations).
- Returns:
Empty structure for filling in SAXS/WAXS data
- Return type:
zarr.group
- zarr_setup(tree)
Return a zarr.group based on a dictionary representing a zarr tree of groups and arrays.
- Parameters:
tree (dict[str, object]) – Nested dictionary representing a zarr tree of groups and arrays.
- Returns:
Zarr group corresponding to the contents of tree.
- Return type:
zarr.group
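The nested-dictionary tree accepted by zarr_setup can be pictured with plain dictionaries: dict values play the role of groups, array values the role of datasets. This sketch uses dicts and NumPy instead of zarr so the group/array distinction stays visible (the walk helper is hypothetical):

```python
import numpy as np

# A nested dictionary standing in for a zarr tree: dict values are
# groups, array values are datasets.
tree = {
    'entry': {
        'q': np.zeros(100),
        'azimuthal': {'intensity': np.zeros((10, 100))},
    },
}

def walk(node, prefix=''):
    """Yield (path, kind) pairs for every group and array in the tree."""
    for name, value in node.items():
        path = f'{prefix}/{name}'
        if isinstance(value, dict):
            yield path, 'group'
            yield from walk(value, path)
        else:
            yield path, 'array'

paths = dict(walk(tree))
```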
- class UnstructuredToStructuredProcessor(*, root: Annotated[Path, PathType(path_type=dir)] | None = '/home/runner/work/ChessAnalysisPipeline/ChessAnalysisPipeline/docs', inputdir: Annotated[Path, PathType(path_type=dir)] | None = None, outputdir: Annotated[Path, PathType(path_type=dir)] | None = None, interactive: bool | None = False, log_level: Literal['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'] | None = 'INFO', logger: Logger | None = None, name: Annotated[str, StringConstraints(strip_whitespace=True, to_upper=None, to_lower=None, strict=None, min_length=1, max_length=None, pattern=None)] | None = None, schema: Annotated[str, StringConstraints(strip_whitespace=True, to_upper=None, to_lower=None, strict=None, min_length=1, max_length=None, pattern=None)] | None = None)
Bases:
Processor
Processor to aggregate “unstructured” data into a single NXdata with a “structured” representation.
- get_common_axes(signals)
Determine the common leading axes shared by all signals.
This method computes the longest common prefix of axis names across all signal definitions. Only axes that appear in the same order at the beginning of each signal’s "axes" list are included in the result. This is used to identify the shared coordinate dimensions for a structured NXdata group.
- Parameters:
signals (list[dict]) – Validated signal definitions. Each signal must define an "axes" key containing an ordered list of axis names.
- Returns:
List of axis names that form the common leading axes for all signals. Returns an empty list if no common prefix exists.
- Return type:
list[str]
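The longest-common-prefix rule can be sketched in a few lines (common_leading_axes is a hypothetical stand-in, not the method itself):

```python
# Hypothetical sketch of the longest-common-prefix logic described above.
def common_leading_axes(signals):
    """Return the longest list of axis names shared, in order, by all signals."""
    if not signals:
        return []
    common = []
    for names in zip(*(s['axes'] for s in signals)):
        if len(set(names)) == 1:
            common.append(names[0])
        else:
            break
    return common

signals = [
    {'axes': ['x', 'y', 'q']},
    {'axes': ['x', 'y', 'chi']},
]
print(common_leading_axes(signals))  # ['x', 'y']
```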
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(context: Any, /) None
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that’s what pydantic-core passes when calling it.
- Args:
self: The BaseModel instance. context: The context.
- process(data, fields, name='data', attrs=None)
Return an NXdata object containing a single structured dataset composed from multiple unstructured input datasets.
This method validates the field configuration, validates and reshapes the input data, determines common axes across all signals, and constructs a NeXus NXdata group containing signal and axis fields.
- Parameters:
data (list[PipelineData]) – Input data objects containing unstructured datasets.
fields (list[dict[str, object]]) –
Configuration describing how to structure the input data. This is a list of dictionaries. Each dictionary must contain the required keys:
"name": Name of the data item, which must correspond to the name field of an item in data.
"type": Either "signal" or "axis".
"axes": Required only for items where "type" is "signal". List of the names of the fields containing coordinate axes data for each dimension of the signal.
Optional keys include:
"attrs": Dictionary of NeXus attributes to attach to the item.
name (str) – Name of the resulting NXdata group.
attrs (dict[str, object] or None) – Attributes to attach to the resulting NXdata group. The common axes determined during processing will be added to this dictionary under the "axes" key.
- Returns:
A structured NeXus NXdata object containing all signals and axes defined by the configuration.
- Return type:
nexusformat.nexus.NXdata
- structure_signal_values(signals, axes, common_axes)
Reshape and populate structured signal arrays using common axes.
This method computes index mappings from raw axis values to their unique sorted representations, and inserts each signal’s unstructured data into its preallocated structured array.
Only the common axes are used for structuring; any trailing, signal-specific axes are assumed to have already been handled when allocating the structured signal arrays.
- Parameters:
signals (list[dict]) – Signal definitions with raw and preallocated structured data arrays.
axes (list[dict]) – Axis definitions containing raw values and unique values.
common_axes (list[str]) – Ordered list of the names of the dataset’s common axes.
- Returns:
Updated signal definitions with populated structured arrays
Unmodified axis definitions
List of common axis names shared by all signals
- Return type:
tuple[list[dict], list[dict], list[str]]
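The index-mapping step can be illustrated with NumPy’s searchsorted against unique sorted axis values; a self-contained sketch with made-up points:

```python
import numpy as np

# Unstructured points: each measurement carries its (x, y) coordinates.
x_raw = np.array([0.0, 0.0, 1.0, 1.0])
y_raw = np.array([0.0, 1.0, 0.0, 1.0])
signal_raw = np.array([10.0, 20.0, 30.0, 40.0])

# Map raw axis values onto indices in their unique sorted values, then
# scatter the unstructured signal into a preallocated structured array.
x_unique = np.unique(x_raw)
y_unique = np.unique(y_raw)
structured = np.full((x_unique.size, y_unique.size), np.nan)
structured[np.searchsorted(x_unique, x_raw),
           np.searchsorted(y_unique, y_raw)] = signal_raw
```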
- validate_config_fields(fields)
Validate and normalize the field configuration.
This method separates the input field configuration into signal and axis definitions, performs basic validation, and ensures that all axes referenced by signals are defined as axis fields.
The returned signal and axis dictionaries are normalized into a consistent internal representation used by later processing stages.
- Parameters:
fields (list[dict[str, object]]) – Configuration describing how input data should be structured. Each item must define a "name" and "type" key, where "type" is either "signal" or "axis". Signal entries must additionally define an "axes" list.
- Returns:
Tuple of validated signal and axis definitions.
- Return type:
tuple[list[dict], list[dict]]
- Raises:
ValueError – If a signal references an axis that is not defined, or if a signal is defined before any axes exist.
- validate_data(data, signals, axes)
Validate and normalize input data for axes and signals.
This method retrieves raw input data for each axis and signal, propagates metadata attributes, computes unique axis values, and allocates structured arrays for signal data.
- For each axis:
The raw data is loaded
Attributes are merged (without overwriting user-specified ones)
Unique axis values are computed
- For each signal:
The raw data is loaded
Attributes are merged (without overwriting user-specified ones)
A structured output array is allocated based on its axes
The total signal size is validated against the expected shape
- Parameters:
data (list[PipelineData]) – Input unstructured data items.
signals (list[dict]) – Validated signal field definitions.
axes (list[dict]) – Validated axis field definitions.
- Returns:
Updated signal and axis definitions with populated values and derived metadata.
- Return type:
tuple[list[dict], list[dict]]
- Raises:
ValueError – If a signal’s data size does not match the expected size derived from its axes.
- class UpdateValuesProcessor(*, root: Annotated[Path, PathType(path_type=dir)] | None = '/home/runner/work/ChessAnalysisPipeline/ChessAnalysisPipeline/docs', inputdir: Annotated[Path, PathType(path_type=dir)] | None = None, outputdir: Annotated[Path, PathType(path_type=dir)] | None = None, interactive: bool | None = False, log_level: Literal['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'] | None = 'INFO', logger: Logger | None = None, name: Annotated[str, StringConstraints(strip_whitespace=True, to_upper=None, to_lower=None, strict=None, min_length=1, max_length=None, pattern=None)] | None = None, schema: Annotated[str, StringConstraints(strip_whitespace=True, to_upper=None, to_lower=None, strict=None, min_length=1, max_length=None, pattern=None)] | None = None, pipeline_fields: dict = {'map_config': 'common.models.map.MapConfig', 'pyfai_config': 'common.models.integration.PyfaiIntegrationConfig'}, map_config: MapConfig = None, pyfai_config: PyfaiIntegrationConfig, spec_file: Annotated[Path, PathType(path_type=file)], scan_number: Annotated[int, None, Interval(gt=0, ge=None, lt=None, le=None), None], detectors: Annotated[list[Detector], Len(min_length=1, max_length=None)], raw_data: bool | None = True)
Bases:
Processor
Processes a slice of data for updating values in an existing container for a SAXS/WAXS experiment.
- Variables:
map_config – Map Configuration.
pyfai_config – PyFAI integration configuration.
spec_file – SPEC file containing scan from which to read and process a slice of raw data.
scan_number – Number of scan from which to read and process a slice of raw data.
detectors – List of detector configurations.
raw_data – Flag to indicate whether or not space for raw detector data should be included in the values returned; defaults to True.
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(context: Any, /) None
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that’s what pydantic-core passes when calling it.
- Args:
self: The BaseModel instance. context: The context.
- pipeline_fields: dict
- process(data, idx_slice={'start': 0, 'step': 1})
Extract the contents of the input data, add a string to it, and return the amended value.
- Parameters:
data – Input data.
- Returns:
Processed data.
- pyfai_config: PyfaiIntegrationConfig
- raw_data: bool | None
- scan_number: Annotated[int, None, Interval(gt=0, ge=None, lt=None, le=None), None]
- spec_file: Annotated[Path, PathType(path_type=file)]
CHAP.saxswaxs.reader module
SAXSWAXS command line reader.
CHAP.saxswaxs.writer module
SAXSWAXS command line writer.
Module contents
This subpackage contains PipelineItems unique to SAXSWAXS data processing workflows.