
RTRS Tonnes 2016–2018

s3://trase-storage/brazil/soy/indicators/out/RTRS_tonnes_2016_2018.csv

Dbt path: trase_production.main_brazil.rtrs_tonnes_2016_2018

Explore on Metabase: Full table; summary statistics

Containing yaml file: trase/data_pipeline/models/brazil/soy/indicators/out/_schema.yml

Model file: trase/data_pipeline/models/brazil/soy/indicators/out/rtrs_tonnes_2016_2018.py

Calls script: trase/data/brazil/indicators/actors/certification/rtrs/RTRS_tonnes_2016_2018.py

Dbt test runs & lineage: Test results · Lineage

Full dbt_docs page: Open in dbt docs (includes the lineage graph at the bottom right, tests, and downstream dependencies)

Tags: mock_model, brazil, indicators, out, soy


rtrs_tonnes_2016_2018

Description

This model was auto-generated from the .yml 'lineage' files in S3. The dbt model itself only raises an error; the actual script that created the data lives elsewhere, at trase/data/brazil/indicators/actors/certification/rtrs/out/RTRS_tonnes_2016_2018.py [permalink]. It was last run by Harry Biddle.


Details

Column Type Description

Models / Seeds

  • source.trase_duckdb.trase-storage-raw.rtrs_2016
  • source.trase_duckdb.trase-storage-raw.rtrs_2017

Sources

  • trase-storage-raw.rtrs_2016
  • trase-storage-raw.rtrs_2017

Script source (RTRS_tonnes_2016_2018.py):

# -*- coding: utf-8 -*-
import pandas as pd

from trase.tools.aws.aws_helpers import read_s3_object
from trase.tools.aws.metadata import write_csv_for_upload
from trase.tools.aws.tracker import S3Object

CSV_DELIMITER = ","
ENCODING = "UTF8"
KEY_2016 = "brazil/indicators/actors/certification/rtrs/in/RTRS_2016.csv"
KEY_2017 = "brazil/indicators/actors/certification/rtrs/in/RTRS_2017.csv"


def get_dataframe(key):
    """Read a raw CSV from S3 and parse it into a string-typed DataFrame."""
    raw_lines = read_s3_object(key)
    headers = raw_lines[0].decode(ENCODING).rstrip().split(CSV_DELIMITER)
    data = [line.decode(ENCODING).rstrip().split(CSV_DELIMITER) for line in raw_lines[1:]]
    return pd.DataFrame(data, columns=headers)


# Read the two annual RTRS certification datasets.
df_2016 = get_dataframe(KEY_2016)
df_2017 = get_dataframe(KEY_2017)

# Normalise the tonnage column name (its capitalisation differs between
# the two years), keep only the shared columns, and stack the years.
df_2016 = df_2016.rename(columns={"Tons_RTRS": "tonnes"})[["GEOCODE", "YEAR", "tonnes"]]
df_2017 = df_2017.rename(columns={"TONS_RTRS": "tonnes"})[["GEOCODE", "YEAR", "tonnes"]]
output = pd.concat([df_2016, df_2017])

# Write the combined table to a CSV buffer and upload it to S3,
# recording the two input files as upstream lineage.
write_csv_for_upload(
    output,
    "brazil/indicators/actors/certification/rtrs/out/RTRS_tonnes_2016_2018.csv",
    upstream=[S3Object(KEY_2016, "trase-storage"), S3Object(KEY_2017, "trase-storage")],
)
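
Note that the manual split in get_dataframe breaks if a field ever contains a quoted comma. A minimal alternative sketch, assuming read_s3_object returns the file's lines as raw bytes (as the indexing above implies), delegates parsing to pandas instead; get_dataframe_via_pandas is a hypothetical helper name, not part of the pipeline:

import io

import pandas as pd

from trase.tools.aws.aws_helpers import read_s3_object


def get_dataframe_via_pandas(key):
    # Assumption: read_s3_object returns a list of byte strings, one per line.
    raw_lines = read_s3_object(key)
    buffer = io.BytesIO(b"\n".join(line.rstrip() for line in raw_lines))
    # pd.read_csv handles quoting, escaping, and embedded delimiters;
    # dtype=str mirrors the all-strings output of the manual parser.
    return pd.read_csv(buffer, encoding="utf-8", dtype=str)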

dbt model stub (rtrs_tonnes_2016_2018.py):

import pandas as pd


def model(dbt, cursor):
    # Register the upstream sources so dbt can draw the lineage graph.
    dbt.source("trase-storage-raw", "rtrs_2016")
    dbt.source("trase-storage-raw", "rtrs_2017")

    # Mock model: it exists only to document lineage, so it always
    # raises. The return is unreachable but shows the expected shape
    # of a dbt Python model's result (a DataFrame).
    raise NotImplementedError()
    return pd.DataFrame({"hello": ["world"]})
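
For contrast, here is a sketch of what a real (non-mock) dbt Python model over the same sources could look like. It assumes a dbt-duckdb-style adapter where dbt.source() returns a relation convertible to pandas via .df(); that is an assumption about the adapter, and the actual pipeline deliberately keeps this logic in the standalone script above:

import pandas as pd


def model(dbt, session):
    # Assumption: the adapter's dbt.source() result materialises to a
    # pandas DataFrame via .df() (true for duckdb relations).
    df_2016 = dbt.source("trase-storage-raw", "rtrs_2016").df()
    df_2017 = dbt.source("trase-storage-raw", "rtrs_2017").df()

    # Same normalisation as the standalone script: align the tonnage
    # column name across years, then stack the two years.
    df_2016 = df_2016.rename(columns={"Tons_RTRS": "tonnes"})[["GEOCODE", "YEAR", "tonnes"]]
    df_2017 = df_2017.rename(columns={"TONS_RTRS": "tonnes"})[["GEOCODE", "YEAR", "tonnes"]]
    return pd.concat([df_2016, df_2017])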