Understanding how data is organized in Scarf

In this notebook, we provide a more detailed exploration of how the data is organized in Scarf. This can be useful for users who want to customize certain aspects of Scarf or want to extend its functionality.

[1]:
%load_ext autotime
%config InlineBackend.figure_format = 'retina'

import scarf
scarf.__version__
[1]:
'0.8.5'
time: 963 ms (started: 2021-08-22 18:49:03 +00:00)

We download the CITE-Seq dataset that we analyzed earlier.

[2]:
scarf.fetch_dataset('tenx_8K_pbmc_citeseq', save_path='scarf_datasets', as_zarr=True)
INFO: Download started...
INFO: Download finished! File saved here: /home/docs/checkouts/readthedocs.org/user_builds/scarf/checkouts/0.8.5/docs/source/vignettes/scarf_datasets/tenx_8K_pbmc_citeseq/data.zarr.tar.gz
INFO: Extracting Zarr file for tenx_8K_pbmc_citeseq
time: 12.4 s (started: 2021-08-22 18:49:04 +00:00)
[3]:
ds = scarf.DataStore('scarf_datasets/tenx_8K_pbmc_citeseq/data.zarr', nthreads=4)
time: 19.3 ms (started: 2021-08-22 18:49:17 +00:00)

1) Zarr trees

Scarf uses the Zarr format to store raw counts as dense, chunked matrices. The Zarr file is in fact a directory that organizes the data in the form of a tree. The count matrices, the cell and feature attributes, and all other data are stored in this Zarr hierarchy. Some of the key benefits of using Zarr over other formats like HDF5 are:

- Parallel read and write access
- Availability of compression algorithms like LZ4 that provide very high compression and decompression speeds
- Automatic storage of intermediate and processed data

In Scarf, the data is always synchronized between the hard disk and RAM, and as such there is no need to save data manually or to explicitly synchronize the in-memory and on-disk states.
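Because a Zarr file is just a directory on disk, it can also be inspected with the zarr package directly, independently of Scarf's API. A minimal sketch (assuming the zarr package, which Scarf builds upon, is installed):

import zarr

# Open the on-disk store in read-only mode and list the top-level groups
z = zarr.open('scarf_datasets/tenx_8K_pbmc_citeseq/data.zarr', mode='r')
print(list(z.keys()))  # e.g. ['ADT', 'RNA', 'cellData']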

We can inspect how the data is organized within the Zarr file using the show_zarr_tree method of the DataStore.

[4]:
ds.show_zarr_tree(depth=1)
/
 ├── ADT
 ├── RNA
 └── cellData
time: 2.18 ms (started: 2021-08-22 18:49:17 +00:00)

By setting depth=1 we get a look at the top levels of the Zarr hierarchy: ‘RNA’, ‘ADT’ and ‘cellData’. The top level is hence composed of the two assays and the cellData level, which is explained below. Scarf attempts to store most of the data on disk immediately after it is processed. Since the data that we have loaded is already preprocessed, we can see below that the calculated cell attributes are now found under the cellData level. The cell-wise statistics that were calculated using the ‘RNA’ assay have ‘RNA’ prepended to the column name. Similarly, the columns starting with ‘ADT’ were created using the ‘ADT’ assay data.

[5]:
ds.show_zarr_tree(start='cellData')
cellData
 ├── ADT_UMAP1 (7865,) float32
 ├── ADT_UMAP2 (7865,) float32
 ├── ADT_leiden_cluster (7865,) int64
 ├── ADT_nCounts (7865,) float64
 ├── ADT_nFeatures (7865,) float64
 ├── I (7865,) bool
 ├── RNA_UMAP1 (7865,) float32
 ├── RNA_UMAP2 (7865,) float32
 ├── RNA_cluster (7865,) int64
 ├── RNA_leiden_cluster (7865,) int64
 ├── RNA_nCounts (7865,) float64
 ├── RNA_nFeatures (7865,) float64
 ├── RNA_percentMito (7865,) float64
 ├── RNA_percentRibo (7865,) float64
 ├── RNA_tSNE1 (7865,) float64
 ├── RNA_tSNE2 (7865,) float64
 ├── ids (7865,) <U18
 └── names (7865,) <U18
time: 4.1 ms (started: 2021-08-22 18:49:17 +00:00)

The ``I`` column: This column is used to keep track of valid and invalid cells; its boolean values indicate which cells were filtered out (hence False) during the filtering process. One can think of column ``I`` as the default subset of cells that will be used for analysis unless another subset is explicitly chosen by the user. Most methods of the DataStore object accept a parameter called cell_key, whose default value is I, meaning that only cells with a True value in the ``I`` column will be used (see the sketch below).
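For example, a quick way to check how many cells passed filtering is to sum this boolean column; a minimal sketch using the fetch_all method introduced later in this vignette:

valid = ds.cells.fetch_all('I')  # boolean array over all cells
print(valid.sum(), 'of', valid.shape[0], 'cells are valid')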

If we inspect one of the assays, we will see the following Zarr groups. The raw count matrix is stored under the counts group (more on this in the next section). featureData is the feature-level equivalent of cellData and is explained in further detail in the next section. The markers level is explained in the last section. summary_stats_I contains statistics about the features; these statistics are generally created during the feature selection step. ‘normed__I__hvgs’ is explained in detail in the fourth section of this vignette.

[6]:
ds.show_zarr_tree(start='RNA', depth=1)
RNA
 ├── counts (7865, 33538) uint32
 ├── featureData
 ├── markers
 ├── normed__I__hvgs
 └── summary_stats_I
time: 2.63 ms (started: 2021-08-22 18:49:17 +00:00)

The feature-wise attributes for each assay are stored under that assay’s featureData level. The output below shows some of the feature-level statistics.

[7]:
ds.show_zarr_tree(start='RNA/featureData', depth=1)
featureData
 ├── I (33538,) bool
 ├── I__hvgs (33538,) bool
 ├── dropOuts (33538,) int64
 ├── ids (33538,) <U15
 ├── nCells (33538,) int64
 └── names (33538,) <U16
time: 2.37 ms (started: 2021-08-22 18:49:17 +00:00)

2) Cell and feature attributes

The cell and feature level attributes can be accessed through the DataStore, using, for example, ds.cells and ds.RNA.feats respectively (both are objects of the Metadata class). In this section we dive deeper into these attribute tables and perform CRUD operations on them.

The head method provides a quick look at the attribute tables.

[8]:
ds.cells.head()
[8]:
I ids names ADT_UMAP1 ADT_UMAP2 ADT_leiden_cluster ADT_nCounts ADT_nFeatures RNA_UMAP1 RNA_UMAP2 RNA_cluster RNA_leiden_cluster RNA_nCounts RNA_nFeatures RNA_percentMito RNA_percentRibo RNA_tSNE1 RNA_tSNE2
0 True AAACCCAAGATTGTGA-1 AAACCCAAGATTGTGA-1 -13.884989 9.510007 16 981.0 17.0 29.421942 -7.417822 4 3 6160.0 2194.0 8.668831 15.259740 -4.26865 -18.08830
1 True AAACCCACATCGGTTA-1 AAACCCACATCGGTTA-1 -5.097915 10.784672 15 1475.0 17.0 30.801842 -12.649998 4 3 6713.0 2093.0 6.316103 19.037688 5.02671 -14.88920
2 True AAACCCAGTACCGCGT-1 AAACCCAGTACCGCGT-1 -7.885502 12.961405 9 7149.0 17.0 29.397541 4.256729 5 5 3637.0 1518.0 8.056090 16.002200 -14.91320 -18.83030
3 True AAACCCAGTATCGAAA-1 AAACCCAGTATCGAAA-1 12.072053 11.531997 2 6831.0 17.0 5.441914 -32.227364 3 4 1244.0 737.0 9.003215 18.729904 16.69630 -3.35003
4 True AAACCCAGTCGTCATA-1 AAACCCAGTCGTCATA-1 13.928782 12.290624 2 6839.0 17.0 7.166490 -30.397615 3 4 2611.0 1240.0 6.204519 16.353887 18.37000 -1.37607
time: 41.4 ms (started: 2021-08-22 18:49:17 +00:00)

The feature attribute table of any assay can similarly be inspected:

[9]:
ds.RNA.feats.head()
[9]:
I ids names I__hvgs dropOuts nCells
0 False ENSG00000243485 MIR1302-2HG False 7865 0
1 False ENSG00000237613 FAM138A False 7865 0
2 False ENSG00000186092 OR4F5 False 7865 0
3 False ENSG00000238009 AL627309.1 False 7853 12
4 False ENSG00000239945 AL627309.3 False 7865 0
time: 16.2 ms (started: 2021-08-22 18:49:17 +00:00)

Even though the ‘head’ command above may make you think that ds.cells and ds.RNA.feats are Pandas dataframes, they in fact are not. If you wish to obtain a full table as a Pandas dataframe, you can export the columns of your choice as shown below.

[10]:
ds.cells.to_pandas_dataframe(['ids', 'RNA_UMAP1', 'RNA_UMAP2', 'RNA_cluster']).set_index('ids')
[10]:
RNA_UMAP1 RNA_UMAP2 RNA_cluster
ids
AAACCCAAGATTGTGA-1 29.421942 -7.417822 4
AAACCCACATCGGTTA-1 30.801842 -12.649998 4
AAACCCAGTACCGCGT-1 29.397541 4.256729 5
AAACCCAGTATCGAAA-1 5.441914 -32.227364 3
AAACCCAGTCGTCATA-1 7.166490 -30.397615 3
... ... ... ...
TTTGTTGGTTCAAGTC-1 -5.545609 21.742104 18
TTTGTTGGTTGCATGT-1 -26.972895 -0.762715 9
TTTGTTGGTTGCGGCT-1 27.239552 -5.230516 4
TTTGTTGTCGAGTGAG-1 -23.030291 -20.647554 2
TTTGTTGTCGTTCAGA-1 -10.845096 -2.644977 2

7865 rows × 3 columns

time: 16.5 ms (started: 2021-08-22 18:49:17 +00:00)

If you wish to export all the columns as a dataframe, simply provide ds.cells.columns or ds.RNA.feats.columns as the argument rather than a list of column names. If you are interested in just one of the columns, it can be fetched using either the fetch or the fetch_all method.
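For example, exporting every cell attribute at once is a one-liner (a minimal sketch based on the call above):

# All columns, for all cells, as a Pandas dataframe
full_df = ds.cells.to_pandas_dataframe(ds.cells.columns)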

fetch will provide values for a subset of cells (by default, only for those that have a True value in column I, but any other boolean column can be used).

[11]:
clusters = ds.cells.fetch('RNA_cluster')
clusters.shape
[11]:
(7549,)
time: 4.85 ms (started: 2021-08-22 18:49:17 +00:00)

fetch_all will return values for all the cells.

[12]:
clusters_all = ds.cells.fetch_all('RNA_cluster')
clusters_all.shape
[12]:
(7865,)
time: 3.2 ms (started: 2021-08-22 18:49:17 +00:00)

If you wish to add a new column, the insert method can be used on either the cell or the feature attributes. The insert method takes care of placing the values in the right rows even when values are provided for only a subset of rows. The default value of the key parameter of insert is I, so the values are assumed to be in the same order as the cells that have a True value in I.

[13]:
is_clust_1 = clusters == 1   # is_clust_1 has just 7549 elements (same as the number of cells with value True in `I`)
ds.cells.insert(column_name='is_clust1', values=is_clust_1)
time: 3.84 ms (started: 2021-08-22 18:49:17 +00:00)
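The freshly inserted boolean column can itself be used as a subsetting key. A hedged sketch (assuming, as for the default I, that fetch accepts the name of the boolean column through its key parameter):

# Cell ids of cluster 1 cells only; 'is_clust1' was created above
clust1_ids = ds.cells.fetch('ids', key='is_clust1')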

If we try to add values to a column that already exists, Scarf will throw an error. For example, if we simply rerun the command above, we get an error:

[14]:
ds.cells.insert(column_name='is_clust1', values=is_clust_1)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/tmp/ipykernel_6260/1353646084.py in <module>
----> 1 ds.cells.insert(column_name='is_clust1', values=is_clust_1)

~/checkouts/readthedocs.org/user_builds/scarf/envs/0.8.5/lib/python3.8/site-packages/scarf/metadata.py in insert(self, column_name, values, fill_value, key, overwrite, location, force)
    364             raise ValueError(f"ERROR: {col} is a protected column name in MetaData class.")
    365         if col in self.columns and overwrite is False:
--> 366             raise ValueError(f"ERROR: {col} already exists. Please set `overwrite` to True to overwrite.")
    367         if type(values) == list:
    368             logger.warning("'values' parameter is of `list` type and not `np.ndarray` as expected. The correct dtype "

ValueError: ERROR: is_clust1 already exists. Please set `overwrite` to True to overwrite.
time: 185 ms (started: 2021-08-22 18:49:17 +00:00)

To override this behaviour, set the overwrite parameter to True:

[15]:
ds.cells.insert(column_name='is_clust1', values=is_clust_1, overwrite=True)
time: 4.15 ms (started: 2021-08-22 18:49:17 +00:00)

Please check out the API documentation of the Metadata class for information on how to perform other operations, such as deleting and updating columns.


3) Count matrices and data normalization

Scarf uses the Zarr format so that data can be stored in rectangular chunks. The raw data is saved under the counts level within each assay’s level of the Zarr hierarchy. It can easily be accessed as a Dask array using the rawData attribute of the assay. Note that for a standard analysis one would not interact with the raw data directly; Scarf internally optimizes the use of this Dask array to minimize the memory requirements of all operations.

[16]:
ds.RNA.rawData
[16]:
        Array           Chunk
Bytes   1.06 GB         8.00 MB
Shape   (7865, 33538)   (2000, 1000)
Count   136 Tasks       136 Chunks
Type    uint32          numpy.ndarray
time: 3.37 ms (started: 2021-08-22 18:49:17 +00:00)

The normalized data can be accessed through the normed method of the assay. In Scarf, users do not need to perform the normalization step manually: Scarf stores only the raw data and generates normalized data whenever needed. This means that Scarf may need to perform normalization several times; however, in practice we have noted that the time spent normalizing the data is only a small fraction of routine workflows.

[17]:
ds.RNA.normed()
[17]:
        Array           Chunk
Bytes   818.31 MB       8.11 MB
Shape   (7549, 13550)   (1934, 524)
Count   689 Tasks       136 Chunks
Type    float64         numpy.ndarray
time: 12.4 ms (started: 2021-08-22 18:49:17 +00:00)

Users can override how normalization is performed in Scarf. Normalization is performed by the function referenced by the assay’s normMethod attribute.

[18]:
ds.RNA.normMethod
[18]:
<function scarf.assay.norm_lib_size(assay, counts: <module 'dask.array' from '/home/docs/checkouts/readthedocs.org/user_builds/scarf/envs/0.8.5/lib/python3.8/site-packages/dask/array/__init__.py'>) -> <module 'dask.array' from '/home/docs/checkouts/readthedocs.org/user_builds/scarf/envs/0.8.5/lib/python3.8/site-packages/dask/array/__init__.py'>>
time: 1.65 ms (started: 2021-08-22 18:49:17 +00:00)

Let’s check out the source of the function referenced by ds.RNA.normMethod:

[19]:
import inspect

print(inspect.getsource(ds.RNA.normMethod))
def norm_lib_size(assay, counts: daskarr) -> daskarr:
    return assay.sf * counts / assay.scalar.reshape(-1, 1)

time: 2.28 ms (started: 2021-08-22 18:49:17 +00:00)

The following is an example of how one can override the normalization method:

[20]:
def my_cool_normalization_method(assay, counts):
    import numpy as np

    lib_size = counts.sum(axis=1).reshape(-1, 1) # Calculate total counts for each cell
    return np.log2(counts/lib_size)  # Library size normalization followed by log2 transformation

ds.RNA.normMethod = my_cool_normalization_method
time: 499 µs (started: 2021-08-22 18:49:17 +00:00)

Now, whenever Scarf internally requires normalized values, this function will be used. Scarf also provides a dummy normalization function (scarf.assay.norm_dummy) that performs no normalization at all. This can be useful if you have pre-normalized data and need to disable the default normalization.

[21]:
ds.RNA.normMethod = scarf.assay.norm_dummy
time: 231 µs (started: 2021-08-22 18:49:17 +00:00)

Please note: if you are using a custom function or disabling normalization, you will need to reassign normMethod to the function of your choice every time you load the DataStore object.
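A minimal sketch of what this looks like in a fresh session, reusing the path and the custom function defined above:

# After every load, the custom normalization must be reattached
ds = scarf.DataStore('scarf_datasets/tenx_8K_pbmc_citeseq/data.zarr', nthreads=4)
ds.RNA.normMethod = my_cool_normalization_method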


4) Data caching during graph creation

All the results of the make_graph step are saved under a level whose name has the form ‘normed__{cell key}__{feature key}’. In this case, since we did not provide a cell key, it takes the default value I, i.e. all the cells that were not filtered out. The feature key (feat_key) was set to hvgs. The Zarr hierarchy is organized such that all the intermediate data is saved as well; the intermediate data is arranged in a hierarchy that triggers recomputation when upstream changes are detected. The parameter values are embedded in the level names. For example, ‘reduction__pca__30__I’ means that PCA linear dimension reduction with 30 principal components was used, and that the PCA was fitted across all the cells that have a True value in column I.
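Since the level names are plain string templates, the location of a cached result can be predicted. A small sketch of the convention described above (values taken from the tree shown below):

cell_key, feat_key, dims = 'I', 'hvgs', 30
print(f'RNA/normed__{cell_key}__{feat_key}')   # -> RNA/normed__I__hvgs
print(f'reduction__pca__{dims}__{cell_key}')   # -> reduction__pca__30__I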

[22]:
ds.show_zarr_tree(start='RNA/normed__I__hvgs')
normed__I__hvgs
 ├── data (7549, 1995) float64
 ├── reduction__pca__15__I
 │   ├── ann__l2__50__50__48__4466
 │   │   └── knn__11
 │   │       ├── distances (7399, 11) float64
 │   │       ├── graph__1.0__1.5
 │   │       │   ├── edges (81389, 2) uint64
 │   │       │   └── weights (81389,) float64
 │   │       └── indices (7399, 11) uint64
 │   ├── kmeans__100__4466
 │   │   ├── cluster_centers (100, 15) float64
 │   │   └── cluster_labels (7399,) float64
 │   ├── mu (497,) float64
 │   ├── reduction (497, 15) float64
 │   └── sigma (497,) float64
 └── reduction__pca__30__I
     ├── ann__l2__63__63__48__4466
     │   └── knn__21
     │       ├── distances (7549, 21) float64
     │       ├── graph__1.0__1.5
     │       │   ├── dendrogram (7548, 4) float64
     │       │   ├── dendrogram_coalesced_20
     │       │   │   ├── edgelist (38, 2) uint64
     │       │   │   └── nodelist (39, 3) int64
     │       │   ├── edges (158529, 2) uint64
     │       │   └── weights (158529,) float64
     │       └── indices (7549, 21) uint64
     ├── kmeans__100__4466
     │   ├── cluster_centers (100, 30) float64
     │   └── cluster_labels (7549,) float64
     ├── mu (1995,) float64
     ├── reduction (1995, 30) float64
     └── sigma (1995,) float64
time: 8.95 ms (started: 2021-08-22 18:49:17 +00:00)

The graph calculated by make_graph can easily be loaded using the load_graph method, as shown below. The graph is loaded as a sparse matrix over the cells that were used to create it.

Next, we show how the graph can be accessed, if required. However, as stated above, Scarf normally handles graph loading internally wherever required.

Because Scarf saves all the intermediate data, it may be the case that many graphs are stored in the Zarr hierarchy. load_graph will load only the latest graph that was computed (for the given assay, cell key and feature key).

[23]:
ds.load_graph(from_assay='RNA', cell_key='I', feat_key='hvgs', symmetric=False, upper_only=False)
[23]:
<7549x7549 sparse matrix of type '<class 'numpy.float64'>'
        with 158529 stored elements in Compressed Sparse Row format>
time: 15.5 ms (started: 2021-08-22 18:49:17 +00:00)

If you would like to load a graph generated using other parameters, simply run make_graph again with those parameters. make_graph will not recalculate a graph that already exists; it will simply mark it as the latest graph, which can then be loaded using load_graph.
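A hedged sketch of that round trip (parameter names as in Scarf’s make_graph API; dims=15 and k=11 correspond to the smaller cached graph visible in the tree above):

# Re-select a previously computed graph; no recomputation happens if it exists
ds.make_graph(feat_key='hvgs', dims=15, k=11)
g = ds.load_graph(from_assay='RNA', cell_key='I', feat_key='hvgs',
                  symmetric=False, upper_only=False)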


5) Fetching marker features

Marker features (e.g. marker genes) for a selection of cells can be identified using run_marker_search. The markers for each group are stored under the ‘markers’ level within the assay’s group. Within the ‘{assay}/markers’ level there are sublevels (only one here) created based on which cell set and cell grouping were used: ‘{cell_key}__{group_key}’. Stored below this Zarr level are the individual group labels, each containing the marker feature ids and their corresponding scores.

[24]:
ds.show_zarr_tree(start='RNA/markers', depth=2)
markers
 └── I__RNA_cluster
     ├── 1
     ├── 10
     ├── 11
     ├── 12
     ├── 13
     ├── 14
     ├── 15
     ├── 16
     ├── 17
     ├── 18
     ├── 19
     ├── 2
     ├── 20
     ├── 3
     ├── 4
     ├── 5
     ├── 6
     ├── 7
     ├── 8
     └── 9
time: 5.31 ms (started: 2021-08-22 18:49:17 +00:00)

The marker list for any group (e.g. a cell cluster) can be fetched as shown below:

[25]:
ds.get_markers(group_key='RNA_cluster', group_id=7)
[25]:
score names
ids
ENSG00000144290 0.696736 SLC4A10
ENSG00000062524 0.52321 LTK
ENSG00000143365 0.473731 RORC
ENSG00000111796 0.381826 KLRB1
ENSG00000113088 0.379597 GZMK
ENSG00000218357 0.362173 LINC01644
ENSG00000099282 0.359214 TSPAN15
ENSG00000197635 0.354979 DPP4
ENSG00000080573 0.311975 COL5A3
ENSG00000174946 0.297659 GPR171
ENSG00000204475 0.287263 NCR3
ENSG00000168824 0.282651 NSG1
ENSG00000109906 0.281518 ZBTB16
ENSG00000000971 0.273301 CFH
ENSG00000139187 0.269599 KLRG1
ENSG00000206561 0.252777 COLQ
time: 45.5 ms (started: 2021-08-22 18:49:17 +00:00)

One can also export the names of markers for all the groups to a CSV file like below:

[26]:
ds.export_markers_to_csv(group_key='RNA_cluster', csv_filename='test.csv')
time: 1.01 s (started: 2021-08-22 18:49:17 +00:00)

That is all for this vignette.