{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "#### Writing to Google BigQuery\n", "\n", "1. Ensure you have a Google BigQuery service account key on disk\n", "2. Set the location of the service key as the environment variable **BQ_KEY**\n", "3. The dataset will be created automatically within the project associated with the service key\n", "\n", "The cell below creates a dataframe that will be stored in Google BigQuery." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "100%|██████████| 1/1 [00:00<00:00, 5440.08it/s]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "['data transport version ', '2.0.0']\n" ] } ], "source": [ "#\n", "# Writing to a Google BigQuery database\n", "#\n", "import transport\n", "from transport import providers\n", "import pandas as pd\n", "import os\n", "\n", "PRIVATE_KEY = os.environ['BQ_KEY'] #-- location of the service account key\n", "DATASET = 'demo'\n", "_data = pd.DataFrame({\"name\":['James Bond','Steve Rogers','Steve Nyemba'],'age':[55,150,44]})\n", "bqw = transport.factory.instance(provider=providers.BIGQUERY,dataset=DATASET,table='friends',context='write',private_key=PRIVATE_KEY)\n", "bqw.write(_data,if_exists='replace') #-- default behavior is append\n", "print (['data transport version ', transport.__version__])\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Reading from Google BigQuery\n", "\n", "The cell below reads the data written by the cell above and computes the average age with a simple Google BigQuery query:\n", "\n", "- Basic read of the designated table (friends) created above\n", "- Execution of an aggregate SQL query against the table\n", "\n", "**NOTE**\n", "\n", "**transport.factory.instance** and **transport.instance** are interchangeable; the former makes it explicit to maintainers that a factory design pattern is used."
] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Downloading: 100%|\u001b[32m██████████\u001b[0m|\n", "Downloading: 100%|\u001b[32m██████████\u001b[0m|\n", " name age\n", "0 James Bond 55\n", "1 Steve Rogers 150\n", "2 Steve Nyemba 44\n", "--------- STATISTICS ------------\n", " _counts f0_\n", "0 3 83.0\n" ] } ], "source": [ "\n", "import transport\n", "from transport import providers\n", "import os\n", "\n", "PRIVATE_KEY = os.environ['BQ_KEY'] #-- location of the service account key\n", "bqr = transport.instance(provider=providers.BIGQUERY,dataset='demo',table='friends',private_key=PRIVATE_KEY)\n", "_df = bqr.read() #-- reads the entire table\n", "_query = 'SELECT COUNT(*) _counts, AVG(age) from demo.friends'\n", "_sdf = bqr.read(sql=_query) #-- executes the aggregate query\n", "print (_df)\n", "print ('--------- STATISTICS ------------')\n", "print (_sdf)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The cell below shows the content of an auth_file. If the dataset/table in question is not to be shared in code, you can place the corresponding parameters in an auth_file instead.\n", "\n", "**NOTE**:\n", "\n", "The auth_file is intended to be **JSON** formatted." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'dataset': 'demo', 'table': 'friends'}" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "\n", "{\n", " \n", " \"dataset\":\"demo\",\"table\":\"friends\"\n", "}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.7" } }, "nbformat": 4, "nbformat_minor": 2 }