# Introduction

This project implements an abstraction for objects that access a variety of data stores, providing read/write through a simple and expressive interface. The abstraction works with **NoSQL**, **SQL** and **Cloud** data stores and leverages **pandas**.

# Why Use Data-Transport?

Data-Transport is a simple framework that:

- is easy to install & modify (open-source)
- enables access to multiple database technologies (pandas, SQLAlchemy)
- enables notebook sharing without exposing database credentials
- supports pre/post-processing specifications (pipelines)

## Installation

Within a virtual environment, run the following (this installs everything):

    pip install "data-transport[all]@git+https://github.com/lnyemba/data-transport.git"

The optional components, specified in square brackets, are **nosql**, **cloud**, **other** and **warehouse**:

    pip install "data-transport[nosql,cloud,other,warehouse]@git+https://github.com/lnyemba/data-transport.git"

The available components:

0. **sql** (installed by default): netezza; mysql; postgresql; duckdb; sqlite3; sqlserver
1. **nosql**: mongodb/ferretdb; couchdb
2. **cloud**: s3; bigquery; databricks
3. **other**: files; http; rabbitmq
4. **warehouse**: apache drill; apache iceberg

## Additional features

- Reads are separated from writes to avoid accidental writes (see the sketches under *Examples* below).
- Streaming (for large volumes of data) by specifying a chunk size.
- A CLI to add entries to the registry and run ETL jobs.
- Implements best practices for collaborative environments such as Apache Zeppelin, JupyterHub, SageMaker, ...

## Learn More

Notebooks with sample code to read/write against MongoDB, CouchDB, Netezza, PostgreSQL, Google BigQuery, Databricks, Microsoft SQL Server, MySQL, ... are available.

Visit the [data-transport homepage](https://healthcareio.the-phi.com/data-transport).
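## Examples

As a quick illustration of the read/write separation described above, here is a minimal sketch. It assumes the factory-style accessors `transport.get.reader` / `transport.get.writer` and the `providers` constants; the parameter names (`database`, `table`) and their values are illustrative, so check the notebooks linked above for the exact signatures your version supports.

```python
# Minimal sketch (assumed API): readers and writers are distinct
# objects, so code that only needs to read never holds a write handle.
import transport
from transport import providers

reader = transport.get.reader(
    provider=providers.POSTGRESQL,  # assumed provider constant
    database="demo",                # hypothetical database name
    table="friends",                # hypothetical table name
)
df = reader.read()                  # returns a pandas DataFrame

writer = transport.get.writer(
    provider=providers.POSTGRESQL,
    database="demo",
    table="friends_copy",
)
writer.write(df)                    # persist the DataFrame
```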
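For large volumes, the feature list above mentions streaming via a chunk size. The sketch below assumes `read()` accepts a pandas-style `chunksize` argument and yields DataFrame chunks; treat the exact argument name as an assumption and verify it against your installed version.

```python
# Streaming sketch (assumed chunksize signature): process a large
# table in fixed-size DataFrame chunks instead of loading it at once.
import transport
from transport import providers

reader = transport.get.reader(provider=providers.POSTGRESQL,
                              database="demo", table="big_table")
writer = transport.get.writer(provider=providers.POSTGRESQL,
                              database="demo", table="big_table_copy")

for chunk in reader.read(chunksize=10_000):  # assumed argument
    writer.write(chunk)                      # write one chunk at a time
```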
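Because every store is reached through the same interface, an ETL step reduces to a read followed by a write, regardless of the technologies on either side. The provider constants and parameter names below (`db`, `collection`) are assumptions for illustration.

```python
# ETL sketch (assumed provider constants and parameters): copy a
# MongoDB collection into a PostgreSQL table through the shared
# reader/writer interface.
import transport
from transport import providers

source = transport.get.reader(provider=providers.MONGODB,       # assumed constant
                              db="demo", collection="friends")  # hypothetical params
target = transport.get.writer(provider=providers.POSTGRESQL,
                              database="demo", table="friends")

target.write(source.read())  # read from one store, write to the other
```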