Introduction
This project implements an abstraction layer over a variety of data stores, exposing reads and writes through a simple, expressive interface. The abstraction works with NoSQL, SQL, and cloud data stores and leverages pandas.
Why Use Data-Transport?
It primarily serves data scientists who do not care about the underlying database and want a simple, consistent way to read, write, and move data. It also provides a lightweight Extract-Transform-Load (ETL) API and a command-line (CLI) tool, and it supports attaching pre/post-processing pipeline functions to reads and writes. In short (see the sketch after this list):
- Familiar pandas data-frame interface
- Connectivity drivers included
- Read/write data from a variety of sources
- Useful for data migrations and ETL
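The snippet below is a minimal sketch of what reading looks like, based on the factory interface shown in the project notebooks; the provider, connection parameters, and table name are illustrative assumptions, not a definitive recipe.

```python
import transport
from transport import providers

# Illustrative parameters: swap in your own database and table.
reader = transport.get.reader(provider=providers.POSTGRESQL,
                              database='mydb', table='users')
df = reader.read()   # returns a pandas DataFrame
print(df.head())
```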
Installation
Within a virtual environment, run:
pip install git+https://github.com/lnyemba/data-transport.git
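After installation you can confirm the package is importable; note that the module is named transport even though the package is data-transport.

```python
# Sanity check: the data-transport package installs as the 'transport' module.
import transport
print(transport.__file__)  # location of the installed package
```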
Features
- Read/write against over a dozen databases
- Run ETL jobs seamlessly
- Scales and integrates into shared environments such as Apache Zeppelin, JupyterHub, SageMaker, ...
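As an example of an ETL-style move, the sketch below copies a table between two stores through pandas. It assumes the same transport.get.reader/transport.get.writer factories as above; all connection parameters are placeholders.

```python
import transport
from transport import providers

# Illustrative migration: PostgreSQL table -> MongoDB collection.
reader = transport.get.reader(provider=providers.POSTGRESQL,
                              database='mydb', table='users')
writer = transport.get.writer(provider=providers.MONGODB,
                              db='mydb', collection='users')

df = reader.read()   # pandas DataFrame holding the source rows
writer.write(df)     # persist the frame to the target store
```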
What's new
Unlike versions 2.0 and earlier, the current release focuses on collaborative environments such as Jupyter-style servers and Apache Zeppelin:
1. Simpler syntax to create a reader or writer
2. An auth-file registry whose entries can be referenced by label
3. DuckDB support
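The label-based registry lets a connection be referenced by name instead of repeating credentials in code. The sketch below assumes a registry entry labeled 'mypg' was created beforehand (for example, via the CLI); the label is hypothetical.

```python
import transport

# 'mypg' is a hypothetical label previously stored in the auth-file registry.
reader = transport.get.reader(label='mypg')
df = reader.read()
```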
Learn More
Notebooks with sample code for reading/writing against MongoDB, CouchDB, Netezza, PostgreSQL, Google BigQuery, Databricks, Microsoft SQL Server, MySQL, ... are available. Visit the data-transport homepage.