Provides access to datasets defined in Layer. You can retrieve an instance of this object through the Layer SDK (for example, when fetching the `titanic` dataset); this class should not be initialized directly by end users.
Gets the logged data associated with this model that has the given tag. If the logged data is an image, you can also pass a value for the step parameter.
- tag (str) -- The tag of the logged data to retrieve.
- step (Optional[int]) -- The step at which the data was logged; only meaningful for image data.
Log data for a particular (i.e. non-latest) dataset build.
For more details about logging in general, see the layer.log() documentation.
- data (Mapping[str, Union[str, float, bool, int, List[Any], np.ndarray[Any, Any], Dict[str, Any], pandas.DataFrame, PIL.Image.Image, matplotlib.figure.Figure, matplotlib.axes._subplots.AxesSubplot, Image, module, Path, Markdown]]) --
- step (Optional[int]) --
- category (Optional[str]) --
Fetches the dataset as a pandas DataFrame.
A pandas DataFrame containing your dataset.
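The returned object is an ordinary pandas DataFrame, so all standard DataFrame operations apply to it. A minimal sketch of typical downstream use (the column names below are hypothetical, not part of the Layer API):

```python
import pandas as pd

# Stand-in for what to_pandas() returns: a plain pandas DataFrame.
# Columns here are illustrative only.
df = pd.DataFrame({"Survived": [0, 1, 1, 0], "Age": [22.0, 38.0, 26.0, 35.0]})

survival_rate = df["Survived"].mean()  # fraction of rows with Survived == 1
adults = df[df["Age"] >= 30]           # ordinary boolean filtering
```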
Fetches the dataset as a PyTorch DataLoader.
- transformer (Callable[[Any], Any]) -- Function that applies transformations to the data.
- tensors (Optional[List[str]]) -- List of columns to fetch.
- batch_size (Optional[int]) -- How many samples per batch to load (default: 1).
- shuffle (bool) -- Set to True to have the data reshuffled at every epoch (default: False).
- sampler (Optional[Any]) -- Defines the strategy to draw samples from the dataset. Can be any Iterable with __len__ implemented. If specified, shuffle must not be specified.
- batch_sampler (Optional[Any]) -- Like sampler, but returns a batch of indices at a time. Mutually exclusive with batch_size, shuffle, sampler, and drop_last.
- num_workers (int) -- How many subprocesses to use for data loading. 0 means the data is loaded in the main process (default: 0).
- collate_fn (Optional[Callable[[Any], Any]]) -- Merges a list of samples to form a mini-batch of Tensor(s). Used for batched loading from a map-style dataset.
- pin_memory (bool) -- If True, the data loader copies Tensors into CUDA pinned memory before returning them.
- drop_last (bool) -- Set to True to drop the last incomplete batch if the dataset size is not divisible by the batch size. If False and the dataset size is not divisible by the batch size, the last batch will be smaller (default: False).
- timeout (float) -- If positive, the timeout value for collecting a batch from workers. Should always be non-negative (default: 0).
- worker_init_fn (Optional[Callable[[int], None]]) -- If not None, this will be called on each worker subprocess with the worker id (an int in [0, num_workers - 1]) as input, after seeding and before data loading (default: None).
- prefetch_factor (int) -- Number of samples loaded in advance by each worker. 2 means a total of 2 * num_workers samples are prefetched across all workers (default: 2).
- persistent_workers (bool) -- If True, the data loader will not shut down the worker processes after a dataset has been consumed once, keeping the worker Dataset instances alive (default: False).
- generator (Optional[Any]) -- If not None, this RNG will be used by RandomSampler to generate random indexes and by multiprocessing to generate base_seed for workers (default: None).
Returns a torch.utils.data.DataLoader.
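The interaction between batch_size and drop_last described above determines how many batches the loader yields. A small pure-Python sketch of that arithmetic (illustrative only; not Layer or PyTorch code):

```python
import math

def num_batches(dataset_size: int, batch_size: int, drop_last: bool = False) -> int:
    """Number of batches a loader yields for a map-style dataset.

    With drop_last=True the trailing incomplete batch is discarded;
    otherwise it is yielded as a smaller final batch.
    """
    if drop_last:
        return dataset_size // batch_size
    return math.ceil(dataset_size / batch_size)

# 10 samples in batches of 3: batch sizes 3, 3, 3, 1 -> 4 batches
assert num_batches(10, 3) == 4
# with drop_last=True the final partial batch is dropped -> 3 batches
assert num_batches(10, 3, drop_last=True) == 3
```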