API Reference

ApiServer#

class ApiServer(Server)

API server implementation.

API Usage
Public: from frictionless.plugins.server import ApiServer

BigqueryDialect#

class BigqueryDialect(Dialect)

Bigquery dialect representation

API Usage
Public: from frictionless.plugins.bigquery import BigqueryDialect

Arguments:

  • descriptor? str|dict - descriptor
  • project str - project
  • dataset? str - dataset
  • table? str - table

Raises:

  • FrictionlessException - raise any error that occurs during the process

BigqueryParser#

class BigqueryParser(Parser)

Bigquery parser implementation.

API Usage
Public: from frictionless.plugins.bigquery import BigqueryParser

BigqueryPlugin#

class BigqueryPlugin(Plugin)

Plugin for BigQuery

API Usage
Public: from frictionless.plugins.bigquery import BigqueryPlugin

BigqueryStorage#

class BigqueryStorage(Storage)

BigQuery storage implementation

API Usage
Public: from frictionless.plugins.bigquery import BigqueryStorage

Arguments:

  • service object - BigQuery Service object
  • project str - BigQuery project name
  • dataset str - BigQuery dataset name
  • prefix? str - prefix for all names

BufferControl#

class BufferControl(Control)

Buffer control representation

API Usage
Public: from frictionless.plugins.buffer import BufferControl

Arguments:

  • descriptor? str|dict - descriptor

Raises:

  • FrictionlessException - raise any error that occurs during the process

BufferLoader#

class BufferLoader(Loader)

Buffer loader implementation.

API Usage
Public: from frictionless.plugins.buffer import BufferLoader

BufferPlugin#

class BufferPlugin(Plugin)

Plugin for Buffer Data

API Usage
Public: from frictionless.plugins.buffer import BufferPlugin

Check#

class Check(Metadata)

Check representation.

API Usage
Public: from frictionless import Check

It's an interface for writing Frictionless checks.

Arguments:

  • descriptor? str|dict - schema descriptor

Raises:

  • FrictionlessException - raise if metadata is invalid

check.resource#

| @property
| resource()

Returns:

  • Resource? - resource object available after the check.connect call

check.connect#

| connect(resource)

Connect to the given resource

Arguments:

  • resource Resource - data resource

check.validate_start#

| validate_start()

Called to validate the resource after opening

Yields:

  • Error - found errors

check.validate_row#

| validate_row(row)

Called to validate the given row (on every row)

Arguments:

  • row Row - table row

Yields:

  • Error - found errors

check.validate_end#

| validate_end()

Called to validate the resource before closing

Yields:

  • Error - found errors

CkanDialect#

class CkanDialect(Dialect)

Ckan dialect representation

API Usage
Public: from frictionless.plugins.ckan import CkanDialect

Arguments:

  • descriptor? str|dict - descriptor
  • resource? str - resource
  • dataset? str - dataset
  • apikey? str - apikey

Raises:

  • FrictionlessException - raise any error that occurs during the process

CkanParser#

class CkanParser(Parser)

Ckan parser implementation.

API Usage
Public: from frictionless.plugins.ckan import CkanParser

CkanPlugin#

class CkanPlugin(Plugin)

Plugin for CKAN

API Usage
Public: from frictionless.plugins.ckan import CkanPlugin

CkanStorage#

class CkanStorage(Storage)

Ckan storage implementation

Arguments:

  • url string - CKAN instance url e.g. "https://demo.ckan.org"

  • dataset string - dataset id in CKAN e.g. "my-dataset"

  • apikey? str - API key for CKAN e.g. "51912f57-a657-4caa-b2a7-0a1c16821f4b"

API Usage
Public: from frictionless.plugins.ckan import CkanStorage

Control#

class Control(Metadata)

Control representation

API Usage
Public: from frictionless import Control

Arguments:

  • descriptor? str|dict - descriptor

Raises:

  • FrictionlessException - raise any error that occurs during the process

CsvDialect#

class CsvDialect(Dialect)

Csv dialect representation

API Usage
Public: from frictionless.plugins.csv import CsvDialect

Arguments:

  • descriptor? str|dict - descriptor
  • delimiter? str - csv delimiter
  • line_terminator? str - csv line terminator
  • quote_char? str - csv quote char
  • double_quote? bool - csv double quote
  • escape_char? str - csv escape char
  • null_sequence? str - csv null sequence
  • skip_initial_space? bool - csv skip initial space
  • comment_char? str - csv comment char

Raises:

  • FrictionlessException - raise any error that occurs during the process

csvDialect.delimiter#

| @Metadata.property
| delimiter()

Returns:

  • str - delimiter

csvDialect.line_terminator#

| @Metadata.property
| line_terminator()

Returns:

  • str - line terminator

csvDialect.quote_char#

| @Metadata.property
| quote_char()

Returns:

  • str - quote char

csvDialect.double_quote#

| @Metadata.property
| double_quote()

Returns:

  • bool - double quote

csvDialect.escape_char#

| @Metadata.property
| escape_char()

Returns:

  • str? - escape char

csvDialect.null_sequence#

| @Metadata.property
| null_sequence()

Returns:

  • str? - null sequence

csvDialect.skip_initial_space#

| @Metadata.property
| skip_initial_space()

Returns:

  • bool - if skipping initial space

csvDialect.comment_char#

| @Metadata.property
| comment_char()

Returns:

  • str? - comment char

csvDialect.expand#

| expand()

Expand metadata

csvDialect.to_python#

| to_python()

Convert to Python's csv.Dialect

CsvParser#

class CsvParser(Parser)

CSV parser implementation.

API Usage
Public: from frictionless.plugins.csv import CsvParser

CsvPlugin#

class CsvPlugin(Plugin)

Plugin for CSV

API Usage
Public: from frictionless.plugins.csv import CsvPlugin

Detector#

class Detector()

Detector representation

API Usage
Public: from frictionless import Detector

Arguments:

  • buffer_size? int - The number of bytes to be extracted as a buffer. It defaults to 10000
  • sample_size? int - The number of rows to be extracted as a sample. It defaults to 100
  • field_type? str - Enforce all the inferred types to be this type. For more information, please check the "Describing Data" guide.
  • field_names? str[] - Enforce all the inferred fields to have the provided names. For more information, please check the "Describing Data" guide.
  • field_confidence? float - A number from 0 to 1 setting the infer confidence. If 1, the data is guaranteed to be valid against the inferred schema. For more information, please check the "Describing Data" guide. It defaults to 0.9
  • field_float_numbers? bool - Flag to indicate the desired number type. By default, numbers are parsed as Decimal; if True, as float. For more information, please check the "Describing Data" guide. It defaults to False
  • field_missing_values? str[] - Strings to be considered missing values. For more information, please check the "Describing Data" guide. It defaults to ['']
  • schema_sync? bool - Whether to sync the schema. If set to True, the provided schema will be mapped to the inferred schema. This means that, for example, you can provide a subset of fields to be applied on top of the inferred fields, or the provided schema can have a different order of fields.
  • schema_patch? dict - A dictionary to be used as an inferred schema patch. The form of this dictionary should follow the Schema descriptor form, except for the fields property, which should be a mapping keyed by field name with field patches as values. For more information, please check the "Extracting Data" guide.

detector.detect_encoding#

| detect_encoding(buffer, *, encoding=None)

Detect encoding from buffer

Arguments:

  • buffer bytes - byte buffer

Returns:

  • str - encoding

detector.detect_layout#

| detect_layout(sample, *, layout=None)

Detect layout from sample

Arguments:

  • sample any[][] - data sample
  • layout? Layout - data layout

Returns:

  • Layout - layout

detector.detect_schema#

| detect_schema(fragment, *, labels=None, schema=None)

Detect schema from fragment

Arguments:

  • fragment any[][] - data fragment
  • labels? str[] - data labels
  • schema? Schema - data schema

Returns:

  • Schema - schema

Dialect#

class Dialect(Metadata)

Dialect representation

API Usage
Public: from frictionless import Dialect

Arguments:

  • descriptor? str|dict - descriptor

Raises:

  • FrictionlessException - raise any error that occurs during the process

Error#

class Error(Metadata)

Error representation

API Usage
Public: from frictionless import errors

Arguments:

  • descriptor? str|dict - error descriptor
  • note str - an error note

Raises:

  • FrictionlessException - raise any error that occurs during the process

error.note#

| @property
| note()

Returns:

  • str - note

error.message#

| @property
| message()

Returns:

  • str - message

ExcelDialect#

class ExcelDialect(Dialect)

Excel dialect representation

API Usage
Public: from frictionless.plugins.excel import ExcelDialect

Arguments:

  • descriptor? str|dict - descriptor
  • sheet? int|str - number (starting from 1) or name of an Excel sheet
  • workbook_cache? dict - workbook cache
  • fill_merged_cells? bool - whether to fill merged cells
  • preserve_formatting? bool - whether to preserve formatting
  • adjust_floating_point_error? bool - whether to adjust floating point error

Raises:

  • FrictionlessException - raise any error that occurs during the process

excelDialect.sheet#

| @Metadata.property
| sheet()

Returns:

  • str|int - sheet

excelDialect.workbook_cache#

| @Metadata.property
| workbook_cache()

Returns:

  • dict - workbook cache

excelDialect.fill_merged_cells#

| @Metadata.property
| fill_merged_cells()

Returns:

  • bool - fill merged cells

excelDialect.preserve_formatting#

| @Metadata.property
| preserve_formatting()

Returns:

  • bool - preserve formatting

excelDialect.adjust_floating_point_error#

| @Metadata.property
| adjust_floating_point_error()

Returns:

  • bool - adjust floating point error

excelDialect.expand#

| expand()

Expand metadata

ExcelPlugin#

class ExcelPlugin(Plugin)

Plugin for Excel

API Usage
Public: from frictionless.plugins.excel import ExcelPlugin

Field#

class Field(Metadata)

Field representation

API Usage
Public: from frictionless import Field

Arguments:

  • descriptor? str|dict - field descriptor
  • name? str - field name (for machines)
  • title? str - field title (for humans)
  • descriptor? str - field descriptor
  • type? str - field type e.g. string
  • format? str - field format e.g. default
  • missing_values? str[] - missing values
  • constraints? dict - constraints
  • rdf_type? str - RDF type
  • schema? Schema - parent schema object

Raises:

  • FrictionlessException - raise any error that occurs during the process

field.name#

| @Metadata.property
| name()

Returns:

  • str - name

field.title#

| @Metadata.property
| title()

Returns:

  • str? - title

field.description#

| @Metadata.property
| description()

Returns:

  • str? - description

field.type#

| @Metadata.property
| type()

Returns:

  • str - type

field.format#

| @Metadata.property
| format()

Returns:

  • str - format

field.missing_values#

| @Metadata.property
| missing_values()

Returns:

  • str[] - missing values

field.constraints#

| @Metadata.property
| constraints()

Returns:

  • dict - constraints

field.rdf_type#

| @Metadata.property
| rdf_type()

Returns:

  • str? - RDF Type

field.required#

| @Metadata.property(
| write=lambda self, value: setitem(self.constraints, "required", value)
| )
| required()

Returns:

  • bool - if the field is required

field.builtin#

| @property
| builtin()

Returns:

  • bool - True if the field type is builtin (not custom)

field.schema#

| @property
| schema()

Returns:

  • Schema? - parent schema

field.array_item#

| @Metadata.property
| array_item()

Returns:

  • dict - field descriptor

field.array_item_field#

| @Metadata.property(write=False)
| array_item_field()

Returns:

  • dict - field descriptor

field.true_values#

| @Metadata.property
| true_values()

Returns:

  • str[] - true values

field.false_values#

| @Metadata.property
| false_values()

Returns:

  • str[] - false values

field.bare_number#

| @Metadata.property
| bare_number()

Returns:

  • bool - if a bare number

field.float_number#

| @Metadata.property
| float_number()

Returns:

  • bool - whether it's a floating point number

field.decimal_char#

| @Metadata.property
| decimal_char()

Returns:

  • str - decimal char

field.group_char#

| @Metadata.property
| group_char()

Returns:

  • str - group char

field.expand#

| expand()

Expand metadata

field.read_cell#

| read_cell(cell)

Read cell

Arguments:

  • cell any - cell

Returns:

(any, OrderedDict): processed cell and dict of notes

field.read_cell_convert#

| read_cell_convert(cell)

Read cell (convert only)

Arguments:

  • cell any - cell

Returns:

  • any/None - processed cell or None if an error

field.read_cell_checks#

| @Metadata.property(write=False)
| read_cell_checks()

Read cell (checks only)

Returns:

  • OrderedDict - dictionary of check functions keyed by constraint name

field.write_cell#

| write_cell(cell, *, ignore_missing=False)

Write cell

Arguments:

  • cell any - cell to convert
  • ignore_missing? bool - don't convert None values

Returns:

(any, OrderedDict): processed cell and dict of notes

field.write_cell_convert#

| write_cell_convert(cell)

Write cell (convert only)

Arguments:

  • cell any - cell

Returns:

  • any/None - processed cell or None if an error

field.write_cell_missing_value#

| @Metadata.property(write=False)
| write_cell_missing_value()

Write cell (missing value only)

Returns:

  • str - a value to replace None cells

File#

class File()

File representation

FrictionlessException#

class FrictionlessException(Exception)

Main Frictionless exception

API Usage
Public: from frictionless import FrictionlessException

Arguments:

  • error Error - an underlying error

frictionlessException.error#

| @property
| error()

Returns:

  • Error - error

GsheetsDialect#

class GsheetsDialect(Dialect)

Gsheets dialect representation

API Usage
Public: from frictionless.plugins.gsheets import GsheetsDialect

Arguments:

  • descriptor? str|dict - descriptor

Raises:

  • FrictionlessException - raise any error that occurs during the process

gsheetsDialect.credentials#

| @Metadata.property
| credentials()

Returns:

  • str - credentials

GsheetsParser#

class GsheetsParser(Parser)

Google Sheets parser implementation.

API Usage
Public: from frictionless.plugins.gsheets import GsheetsParser

GsheetsPlugin#

class GsheetsPlugin(Plugin)

Plugin for Google Sheets

API Usage
Public: from frictionless.plugins.gsheets import GsheetsPlugin

Header#

class Header(list)

Header representation

API Usage
Public: from frictionless import Header

The constructor of this object is not part of the public API.

Arguments:

  • labels any[] - header row labels
  • fields Field[] - table fields
  • field_positions int[] - field positions
  • row_positions int[] - row positions
  • ignore_case bool - ignore case

header.labels#

| @cached_property
| labels()

Returns:

  • any[] - header row labels

header.fields#

| @cached_property
| fields()

Returns:

  • Field[] - table fields

header.field_names#

| @cached_property
| field_names()

Returns:

  • str[] - table field names

header.field_positions#

| @cached_property
| field_positions()

Returns:

  • int[] - table field positions

header.row_positions#

| @cached_property
| row_positions()

Returns:

  • int[] - table row positions

header.missing#

| @cached_property
| missing()

Returns:

  • bool - if there is no header row

header.errors#

| @cached_property
| errors()

Returns:

  • Error[] - header errors

header.valid#

| @cached_property
| valid()

Returns:

  • bool - whether the header is valid

header.to_str#

| to_str()

Returns:

  • str - the header as a CSV string

header.to_list#

| to_list()

Convert to a list

HtmlDialect#

class HtmlDialect(Dialect)

Html dialect representation

API Usage
Public: from frictionless.plugins.html import HtmlDialect

Arguments:

  • descriptor? str|dict - descriptor
  • selector? str - HTML selector

Raises:

  • FrictionlessException - raise any error that occurs during the process

htmlDialect.selector#

| @Metadata.property
| selector()

Returns:

  • str - selector

htmlDialect.expand#

| expand()

Expand metadata

HtmlParser#

class HtmlParser(Parser)

HTML parser implementation.

API Usage
Public: from frictionless.plugins.html import HtmlParser

HtmlPlugin#

class HtmlPlugin(Plugin)

Plugin for HTML

API Usage
Public: from frictionless.plugins.html import HtmlPlugin

InlineDialect#

class InlineDialect(Dialect)

Inline dialect representation

API Usage
Public: from frictionless.plugins.inline import InlineDialect

Arguments:

  • descriptor? str|dict - descriptor
  • keys? str[] - a list of strings to use as data keys
  • keyed? bool - whether data rows are keyed

Raises:

  • FrictionlessException - raise any error that occurs during the process

inlineDialect.keys#

| @Metadata.property
| keys()

Returns:

  • str[]? - keys

inlineDialect.keyed#

| @Metadata.property
| keyed()

Returns:

  • bool - keyed

inlineDialect.expand#

| expand()

Expand metadata

InlineParser#

class InlineParser(Parser)

Inline parser implementation.

API Usage
Public: from frictionless.plugins.inline import InlineParser

InlinePlugin#

class InlinePlugin(Plugin)

Plugin for Inline

API Usage
Public: from frictionless.plugins.inline import InlinePlugin

Inquiry#

class Inquiry(Metadata)

Inquiry representation.

Arguments:

  • descriptor? str|dict - descriptor

Raises:

  • FrictionlessException - raise any error that occurs during the process

inquiry.tasks#

| @property
| tasks()

Returns:

  • dict[] - tasks

InquiryTask#

class InquiryTask(Metadata)

Inquiry task representation.

Arguments:

  • descriptor? str|dict - descriptor

Raises:

  • FrictionlessException - raise any error that occurs during the process

inquiryTask.source#

| @property
| source()

Returns:

  • any - source

inquiryTask.type#

| @property
| type()

Returns:

  • string? - type

JsonDialect#

class JsonDialect(Dialect)

Json dialect representation

API Usage
Public: from frictionless.plugins.json import JsonDialect

Arguments:

  • descriptor? str|dict - descriptor
  • keys? str[] - a list of strings to use as data keys
  • keyed? bool - whether data rows are keyed
  • property? str - a path within JSON to the data

Raises:

  • FrictionlessException - raise any error that occurs during the process

jsonDialect.keys#

| @Metadata.property
| keys()

Returns:

  • str[]? - keys

jsonDialect.keyed#

| @Metadata.property
| keyed()

Returns:

  • bool - keyed

jsonDialect.property#

| @Metadata.property
| property()

Returns:

  • str? - property

jsonDialect.expand#

| expand()

Expand metadata

JsonParser#

class JsonParser(Parser)

JSON parser implementation.

API Usage
Public: from frictionless.plugins.json import JsonParser

JsonPlugin#

class JsonPlugin(Plugin)

Plugin for Json

API Usage
Public: from frictionless.plugins.json import JsonPlugin

JsonlParser#

class JsonlParser(Parser)

JSONL parser implementation.

API Usage
Public: from frictionless.plugins.json import JsonlParser

Layout#

class Layout(Metadata)

Layout representation

API Usage
Public: from frictionless import Layout

Arguments:

  • descriptor? str|dict - layout descriptor
  • pick_fields? (str|int)[] - what fields to pick
  • skip_fields? (str|int)[] - what fields to skip
  • limit_fields? int - amount of fields
  • offset_fields? int - from what field to start
  • pick_rows? (str|int)[] - what rows to pick
  • skip_rows? (str|int)[] - what rows to skip
  • limit_rows? int - amount of rows
  • offset_rows? int - from what row to start

layout.header#

| @Metadata.property
| header()

Returns:

  • bool - if there is a header row

layout.header_rows#

| @Metadata.property
| header_rows()

Returns:

  • int[] - header rows

layout.header_join#

| @Metadata.property
| header_join()

Returns:

  • str - header joiner

layout.header_case#

| @Metadata.property
| header_case()

Returns:

  • bool - whether the header is case sensitive

layout.pick_fields#

| @Metadata.property
| pick_fields()

Returns:

  • (str|int)[]? - pick fields

layout.skip_fields#

| @Metadata.property
| skip_fields()

Returns:

  • (str|int)[]? - skip fields

layout.limit_fields#

| @Metadata.property
| limit_fields()

Returns:

  • int? - limit fields

layout.offset_fields#

| @Metadata.property
| offset_fields()

Returns:

  • int? - offset fields

layout.pick_rows#

| @Metadata.property
| pick_rows()

Returns:

  • (str|int)[]? - pick rows

layout.skip_rows#

| @Metadata.property
| skip_rows()

Returns:

  • (str|int)[]? - skip rows

layout.limit_rows#

| @Metadata.property
| limit_rows()

Returns:

  • int? - limit rows

layout.offset_rows#

| @Metadata.property
| offset_rows()

Returns:

  • int? - offset rows

layout.is_field_filtering#

| @Metadata.property(write=False)
| is_field_filtering()

Returns:

  • bool - whether there is a field filtering

layout.pick_fields_compiled#

| @Metadata.property(write=False)
| pick_fields_compiled()

Returns:

  • re? - compiled pick fields

layout.skip_fields_compiled#

| @Metadata.property(write=False)
| skip_fields_compiled()

Returns:

  • re? - compiled skip fields

layout.pick_rows_compiled#

| @Metadata.property(write=False)
| pick_rows_compiled()

Returns:

  • re? - compiled pick rows

layout.skip_rows_compiled#

| @Metadata.property(write=False)
| skip_rows_compiled()

Returns:

  • re? - compiled skip rows

layout.expand#

| expand()

Expand metadata

Loader#

class Loader()

Loader representation

API Usage
Public: from frictionless import Loader

Arguments:

  • resource Resource - resource

loader.resource#

| @property
| resource()

Returns:

  • Resource - resource

loader.buffer#

| @property
| buffer()

Returns:

  • bytes - buffer

loader.byte_stream#

| @property
| byte_stream()

Resource byte stream

The stream is available after opening the loader

Returns:

  • io.ByteStream - resource byte stream

loader.text_stream#

| @property
| text_stream()

Resource text stream

The stream is available after opening the loader

Returns:

  • io.TextStream - resource text stream

loader.open#

| open()

Open the loader as "io.open" does

loader.close#

| close()

Close the loader as "filelike.close" does

loader.closed#

| @property
| closed()

Whether the loader is closed

Returns:

  • bool - if closed

loader.read_byte_stream#

| read_byte_stream()

Read byte stream

Returns:

  • io.ByteStream - resource byte stream

loader.read_byte_stream_create#

| read_byte_stream_create()

Create byte stream

Returns:

  • io.ByteStream - resource byte stream

loader.read_byte_stream_process#

| read_byte_stream_process(byte_stream)

Process byte stream

Arguments:

  • byte_stream io.ByteStream - resource byte stream

Returns:

  • io.ByteStream - resource byte stream

loader.read_byte_stream_decompress#

| read_byte_stream_decompress(byte_stream)

Decompress byte stream

Arguments:

  • byte_stream io.ByteStream - resource byte stream

Returns:

  • io.ByteStream - resource byte stream

loader.read_byte_stream_buffer#

| read_byte_stream_buffer(byte_stream)

Buffer byte stream

Arguments:

  • byte_stream io.ByteStream - resource byte stream

Returns:

  • bytes - buffer

loader.read_byte_stream_analyze#

| read_byte_stream_analyze(buffer)

Detect metadata using the buffer

Arguments:

  • buffer bytes - byte buffer

loader.read_text_stream#

| read_text_stream()

Read text stream

Returns:

  • io.TextStream - resource text stream

loader.write_byte_stream#

| write_byte_stream(path)

Write from a temporary file

Arguments:

  • path str - path to a temporary file

Returns:

  • any - result of writing e.g. resulting path

loader.write_byte_stream_create#

| write_byte_stream_create(path)

Create byte stream for writing

Arguments:

  • path str - path to a temporary file

Returns:

  • io.ByteStream - byte stream

loader.write_byte_stream_save#

| write_byte_stream_save(byte_stream)

Store byte stream

LocalControl#

class LocalControl(Control)

Local control representation

API Usage
Public: from frictionless.plugins.local import LocalControl

Arguments:

  • descriptor? str|dict - descriptor

Raises:

  • FrictionlessException - raise any error that occurs during the process

LocalLoader#

class LocalLoader(Loader)

Local loader implementation.

API Usage
Public: from frictionless.plugins.local import LocalLoader

LocalPlugin#

class LocalPlugin(Plugin)

Plugin for Local Data

API Usage
Public: from frictionless.plugins.local import LocalPlugin

Metadata#

class Metadata(helpers.ControlledDict)

Metadata representation

API Usage
Public: from frictionless import Metadata

Arguments:

  • descriptor? str|dict - metadata descriptor

Raises:

  • FrictionlessException - raise any error that occurs during the process

metadata.setinitial#

| setinitial(key, value)

Set an initial item in a subclass' constructor

Arguments:

  • key str - key
  • value any - value

metadata.to_copy#

| to_copy()

Create a copy of the metadata

Returns:

  • Metadata - a copy of the metadata

metadata.to_dict#

| to_dict()

Convert metadata to a plain dict

Returns:

  • dict - metadata as a plain dict

metadata.to_json#

| to_json(path=None, encoder_class=None)

Save metadata as a json

Arguments:

  • path str - target path
  • encoder_class? - json encoder class

Raises:

  • FrictionlessException - on any error

metadata.to_yaml#

| to_yaml(path=None)

Save metadata as a yaml

Arguments:

  • path str - target path

Raises:

  • FrictionlessException - on any error

metadata.metadata_valid#

| @property
| metadata_valid()

Returns:

  • bool - whether the metadata is valid

metadata.metadata_errors#

| @property
| metadata_errors()

Returns:

  • Errors[] - a list of the metadata errors

metadata.metadata_attach#

| metadata_attach(name, value)

Helper method for attaching a value to the metadata

Arguments:

  • name str - name
  • value any - value

metadata.metadata_extract#

| metadata_extract(descriptor)

Helper method called during the metadata extraction

Arguments:

  • descriptor any - descriptor

metadata.metadata_process#

| metadata_process()

Helper method called on any metadata change

metadata.metadata_validate#

| metadata_validate(profile=None)

Helper method called on any metadata change

Arguments:

  • profile dict - a profile to validate against

metadata.property#

| @staticmethod
| property(func=None, *, cache=True, reset=True, write=True)

Create a metadata property

Arguments:

  • func func - method
  • cache? bool - cache
  • reset? bool - reset
  • write? func - write

MultipartControl#

class MultipartControl(Control)

Multipart control representation

API Usage
Public: from frictionless.plugins.multipart import MultipartControl

Arguments:

  • descriptor? str|dict - descriptor

Raises:

  • FrictionlessException - raise any error that occurs during the process

multipartControl.expand#

| expand()

Expand metadata

MultipartLoader#

class MultipartLoader(Loader)

Multipart loader implementation.

API Usage
Public: from frictionless.plugins.multipart import MultipartLoader

MultipartPlugin#

class MultipartPlugin(Plugin)

Plugin for Multipart Data

API Usage
Public: from frictionless.plugins.multipart import MultipartPlugin

OdsDialect#

class OdsDialect(Dialect)

Ods dialect representation

API Usage
Public: from frictionless.plugins.ods import OdsDialect

Arguments:

  • descriptor? str|dict - descriptor
  • sheet? str - sheet

Raises:

  • FrictionlessException - raise any error that occurs during the process

odsDialect.sheet#

| @Metadata.property
| sheet()

Returns:

  • int|str - sheet

odsDialect.expand#

| expand()

Expand metadata

OdsParser#

class OdsParser(Parser)

ODS parser implementation.

API Usage
Public: from frictionless.plugins.ods import OdsParser

OdsPlugin#

class OdsPlugin(Plugin)

Plugin for ODS

API Usage
Public: from frictionless.plugins.ods import OdsPlugin

Package#

class Package(Metadata)

Package representation

API Usage
Public: from frictionless import Package

This class is one of the cornerstones of the Frictionless framework. It manages the underlying resources and provides the ability to describe a package.

package = Package(resources=[Resource(path="data/table.csv")])
package.get_resource('table').read_rows() == [
    {'id': 1, 'name': 'english'},
    {'id': 2, 'name': '中国人'},
]

Arguments:

  • source any - Source of the package; can be in various forms. Usually, it's a package descriptor in the form of a dict or a path. It can also be a glob pattern or a resource path.
  • descriptor dict|str - A package descriptor provided explicitly. Keyword arguments will patch this descriptor if provided.
  • resources? dict|Resource[] - A list of resource descriptors. They can be dicts or Resource instances.
  • id? str - A property reserved for globally unique identifiers. Examples of identifiers that are unique include UUIDs and DOIs.
  • name? str - A short url-usable (and preferably human-readable) name. This MUST be lower-case and contain only alphanumeric characters along with “.”, “_” or “-” characters.
  • title? str - A Package title according to the specs. It should be a human-oriented title of the package.
  • description? str - A Package description according to the specs. It should be a human-oriented description of the package.
  • licenses? dict[] - The license(s) under which the package is provided.
  • sources? dict[] - The raw sources for this data package. It MUST be an array of Source objects. Each Source object MUST have a title and MAY have path and/or email properties.
  • profile? str - A string identifying the profile of this descriptor. For example, fiscal-data-package.
  • homepage? str - A URL for the home on the web that is related to this package. For example, github repository or ckan dataset address.
  • version? str - A version string identifying the version of the package. It should conform to the Semantic Versioning requirements and should follow the Data Package Version pattern.
  • contributors? dict[] - The people or organizations who contributed to this package. It MUST be an array. Each entry is a Contributor and MUST be an object. A Contributor MUST have a title property and MAY contain path, email, role and organization properties.
  • keywords? str[] - An Array of string keywords to assist users searching. For example, ['data', 'fiscal']
  • image? str - An image to use for this data package. For example, when showing the package in a listing.
  • created? str - The datetime on which this was created. The datetime must conform to the string formats for RFC3339 datetime.
  • innerpath? str - A ZIP datapackage descriptor inner path: the path to the package descriptor inside the ZIP datapackage (example: some/folder/datapackage.yaml; default: datapackage.json).
  • basepath? str - A basepath of the package. The fullpath of a resource is the basepath joined with the resource path.
  • detector? Detector - File/table detector. For more information, please check the Detector documentation.
  • onerror? ignore|warn|raise - Behaviour if there is an error. It defaults to 'ignore'. The default mode ignores all errors at the resource level; they remain available to the user in the Header and Row objects.
  • trusted? bool - Don't raise an exception on unsafe paths. A path provided as part of the descriptor is considered unsafe if it traverses directories or is absolute. A path provided as source or path is always trusted.
  • hashing? str - A hashing algorithm for resources. It defaults to 'md5'.

Raises:

  • FrictionlessException - raise any error that occurs during the process

package.name#

| @Metadata.property
| name()

Returns:

  • str? - package name

package.id#

| @Metadata.property
| id()

Returns:

  • str? - package id

package.licenses#

| @Metadata.property
| licenses()

Returns:

  • dict? - package licenses

package.profile#

| @Metadata.property
| profile()

Returns:

  • str - package profile

package.title#

| @Metadata.property
| title()

Returns:

  • str? - package title

package.description#

| @Metadata.property
| description()

Returns:

  • str? - package description

package.homepage#

| @Metadata.property
| homepage()

Returns:

  • str? - package homepage

package.version#

| @Metadata.property
| version()

Returns:

  • str? - package version

package.sources#

| @Metadata.property
| sources()

Returns:

  • dict[]? - package sources

package.contributors#

| @Metadata.property
| contributors()

Returns:

  • dict[]? - package contributors

package.keywords#

| @Metadata.property
| keywords()

Returns:

  • str[]? - package keywords

package.image#

| @Metadata.property
| image()

Returns:

  • str? - package image

package.created#

| @Metadata.property
| created()

Returns:

  • str? - package created

package.hashing#

| @Metadata.property(cache=False, write=False)
| hashing()

Returns:

  • str - package hashing

package.basepath#

| @Metadata.property(cache=False, write=False)
| basepath()

Returns:

  • str - package basepath

package.onerror#

| @Metadata.property(cache=False, write=False)
| onerror()

Returns:

  • ignore|warn|raise - on error behaviour

package.trusted#

| @Metadata.property(cache=False, write=False)
| trusted()

Returns:

  • bool - package trusted

package.resources#

| @Metadata.property
| resources()

Returns:

  • Resource[] - package resources

package.resource_names#

| @Metadata.property(cache=False, write=False)
| resource_names()

Returns:

  • str[] - package resource names

package.add_resource#

| add_resource(descriptor)

Add new resource to package.

Arguments:

  • descriptor dict - resource descriptor

Returns:

  • Resource/None - added Resource instance or None if not added

package.get_resource#

| get_resource(name)

Get resource by name.

Arguments:

  • name str - resource name

Raises:

  • FrictionlessException - if resource is not found

Returns:

  • Resource/None - Resource instance or None if not found

package.has_resource#

| has_resource(name)

Check if a resource is present

Arguments:

  • name str - resource name

Returns:

  • bool - whether there is the resource

package.remove_resource#

| remove_resource(name)

Remove resource by name.

Arguments:

  • name str - resource name

Raises:

  • FrictionlessException - if resource is not found

Returns:

  • Resource/None - removed Resource instance or None if not found

package.expand#

| expand()

Expand metadata

It will add default values to the package.

package.infer#

| infer(*, stats=False)

Infer package's attributes

Arguments:

  • stats? bool - stream files completely and infer stats

package.to_copy#

| to_copy()

Create a copy of the package

package.from_bigquery#

| @staticmethod
| from_bigquery(source, *, dialect=None)

Import package from Bigquery

Arguments:

  • source string - BigQuery Service object
  • dialect dict - BigQuery dialect

Returns:

  • Package - package

package.to_bigquery#

| to_bigquery(target, *, dialect=None)

Export package to Bigquery

Arguments:

  • target string - BigQuery Service object
  • dialect dict - BigQuery dialect

Returns:

  • BigqueryStorage - storage

package.from_ckan#

| @staticmethod
| from_ckan(source, *, dialect=None)

Import package from CKAN

Arguments:

  • source any - CKAN source
  • dialect dict - CKAN dialect

Returns:

  • Package - package

package.to_ckan#

| to_ckan(target, *, dialect=None)

Export package to CKAN

Arguments:

  • target any - CKAN target
  • dialect dict - CKAN dialect

Returns:

  • CkanStorage - storage

package.from_sql#

| @staticmethod
| from_sql(source, *, dialect=None)

Import package from SQL

Arguments:

  • source any - SQL connection string or engine
  • dialect dict - SQL dialect

Returns:

  • Package - package

package.to_sql#

| to_sql(target, *, dialect=None)

Export package to SQL

Arguments:

  • target any - SQL connection string or engine
  • dialect dict - SQL dialect

Returns:

  • SqlStorage - storage

package.from_zip#

| @staticmethod
| from_zip(path, **options)

Create a package from ZIP

Arguments:

  • path str - file path
  • **options dict - resource options

package.to_zip#

| to_zip(path, *, encoder_class=None)

Save package to a zip

Arguments:

  • path str - target path
  • encoder_class object - json encoder class

Raises:

  • FrictionlessException - on any error
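The ZIP layout that from_zip and to_zip rely on can be sketched with the standard library alone: by default the descriptor sits inside the archive as datapackage.json (see the innerpath argument above). This is an illustration of the layout, not the library's implementation:

```python
import io
import json
import zipfile

# Write a descriptor into an in-memory ZIP datapackage, then read it back.
descriptor = {"name": "example", "resources": [{"name": "t", "path": "data/t.csv"}]}
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w") as archive:
    archive.writestr("datapackage.json", json.dumps(descriptor))
buffer.seek(0)
with zipfile.ZipFile(buffer) as archive:
    loaded = json.loads(archive.read("datapackage.json"))
```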

PandasDialect#

class PandasDialect(Dialect)

Pandas dialect representation

APIUsage
Publicfrom frictionless.plugins.pandas import PandasDialect

Arguments:

  • descriptor? str|dict - descriptor

Raises:

  • FrictionlessException - raise any error that occurs during the process

PandasParser#

class PandasParser(Parser)

Pandas parser implementation.

APIUsage
Publicfrom frictionless.plugins.pandas import PandasParser

PandasPlugin#

class PandasPlugin(Plugin)

Plugin for Pandas

APIUsage
Publicfrom frictionless.plugins.pandas import PandasPlugin

Parser#

class Parser()

Parser representation

APIUsage
Publicfrom frictionless import Parser

Arguments:

  • resource Resource - resource

parser.resource#

| @property
| resource()

Returns:

  • Resource - resource

parser.loader#

| @property
| loader()

Returns:

  • Loader - loader

parser.sample#

| @property
| sample()

Returns:

  • any[][] - data sample

parser.list_stream#

| @property
| list_stream()

Yields:

  • any[][] - list stream

parser.open#

| open()

Open the parser as "io.open" does

parser.close#

| close()

Close the parser as "filelike.close" does

parser.closed#

| @property
| closed()

Whether the parser is closed

Returns:

  • bool - if closed

parser.read_loader#

| read_loader()

Create and open loader

Returns:

  • Loader - loader

parser.read_list_stream#

| read_list_stream()

Read list stream

Returns:

  • gen<any[][]> - list stream

parser.read_list_stream_create#

| read_list_stream_create()

Create list stream from loader

Arguments:

  • loader Loader - loader

Returns:

  • gen<any[][]> - list stream

parser.read_list_stream_handle_errors#

| read_list_stream_handle_errors(list_stream)

Wrap list stream into error handler

Arguments:

  • list_stream gen<any[][]> - list stream

Returns:

  • gen<any[][]> - list stream

parser.write_row_stream#

| write_row_stream(resource)

Write row stream from the source resource

Arguments:

  • source Resource - source resource

Pipeline#

class Pipeline(Metadata)

Pipeline representation.

Arguments:

  • descriptor? str|dict - pipeline descriptor

Raises:

  • FrictionlessException - raise any error that occurs during the process

pipeline.tasks#

| @property
| tasks()

Returns:

  • dict[] - tasks

pipeline.run#

| run(*, parallel=False)

Run the pipeline
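Pipelines are driven by a descriptor with a tasks array (see pipeline.tasks). The exact task shape sketched below — type, source, and steps keys — is an assumption for illustration, not a guaranteed contract:

```python
# Hypothetical pipeline descriptor; the task keys shown here are
# assumptions made for illustration only.
pipeline_descriptor = {
    "tasks": [
        {
            "type": "resource",
            "source": {"path": "data/table.csv"},
            "steps": [{"code": "cell-set", "fieldName": "name", "value": "x"}],
        }
    ]
}
```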

PipelineTask#

class PipelineTask(Metadata)

Pipeline task representation.

Arguments:

  • descriptor? str|dict - pipeline task descriptor

Raises:

  • FrictionlessException - raise any error that occurs during the process

pipelineTask.run#

| run()

Run the task

Plugin#

class Plugin()

Plugin representation

APIUsage
Publicfrom frictionless import Plugin

It's an interface for writing Frictionless plugins. You can implement one or more methods to hook into the Frictionless system.

plugin.create_check#

| create_check(name, *, descriptor=None)

Create checks

Arguments:

  • name str - check name
  • descriptor dict - check descriptor

Returns:

  • Check - check

plugin.create_control#

| create_control(file, *, descriptor)

Create control

Arguments:

  • file File - control file
  • descriptor dict - control descriptor

Returns:

  • Control - control

plugin.create_dialect#

| create_dialect(file, *, descriptor)

Create dialect

Arguments:

  • file File - dialect file
  • descriptor dict - dialect descriptor

Returns:

  • Dialect - dialect

plugin.create_loader#

| create_loader(file)

Create loader

Arguments:

  • file File - loader file

Returns:

  • Loader - loader

plugin.create_parser#

| create_parser(file)

Create parser

Arguments:

  • file File - parser file

Returns:

  • Parser - parser

plugin.create_server#

| create_server(name)

Create server

Arguments:

  • name str - server name

Returns:

  • Server - server

RemoteControl#

class RemoteControl(Control)

Remote control representation

APIUsage
Publicfrom frictionless.plugins.remote import RemoteControl

Arguments:

  • descriptor? str|dict - descriptor
  • http_session? requests.Session - user defined HTTP session
  • http_preload? bool - don't use HTTP streaming and preload all the data
  • http_timeout? int - user defined HTTP timeout in minutes

Raises:

  • FrictionlessException - raise any error that occurs during the process

remoteControl.http_session#

| @Metadata.property
| http_session()

Returns:

  • requests.Session - HTTP session

remoteControl.http_preload#

| @Metadata.property
| http_preload()

Returns:

  • bool - if not streaming

remoteControl.http_timeout#

| @Metadata.property
| http_timeout()

Returns:

  • int - HTTP timeout in minutes

remoteControl.expand#

| expand()

Expand metadata

RemoteLoader#

class RemoteLoader(Loader)

Remote loader implementation.

APIUsage
Publicfrom frictionless.plugins.remote import RemoteLoader

RemotePlugin#

class RemotePlugin(Plugin)

Plugin for Remote Data

APIUsage
Publicfrom frictionless.plugins.remote import RemotePlugin

Report#

class Report(Metadata)

Report representation.

APIUsage
Publicfrom frictionless import Report

Arguments:

  • descriptor? str|dict - report descriptor
  • time float - validation time
  • errors Error[] - validation errors
  • tasks ReportTask[] - validation tasks

Raises:

  • FrictionlessException - raise any error that occurs during the process

report.version#

| @property
| version()

Returns:

  • str - frictionless version

report.time#

| @property
| time()

Returns:

  • float - validation time

report.valid#

| @property
| valid()

Returns:

  • bool - validation result

report.stats#

| @property
| stats()

Returns:

  • dict - validation stats

report.errors#

| @property
| errors()

Returns:

  • Error[] - validation errors

report.tasks#

| @property
| tasks()

Returns:

  • ReportTask[] - validation tasks

report.task#

| @property
| task()

Returns:

  • ReportTask - validation task (if there is only one)

Raises:

  • FrictionlessException - if there is more than one task

report.expand#

| expand()

Expand metadata

report.flatten#

| flatten(spec=["taskPosition", "rowPosition", "fieldPosition", "code"])

Flatten the report

Arguments:

  • spec any[] - flatten specification

Returns:

  • any[] - flatten report
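An illustrative sketch of the flatten contract: for each error, the values named in the spec are picked out, yielding one flat list per error. This is a plain-Python mimic written for explanation, not the library's implementation:

```python
# Mimic of report.flatten over plain error dicts.
def flatten(errors, spec):
    return [[error.get(prop) for prop in spec] for error in errors]

errors = [
    {"taskPosition": 1, "rowPosition": 2, "fieldPosition": 3, "code": "type-error"},
]
flat = flatten(errors, ["rowPosition", "fieldPosition", "code"])
```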

report.from_validate#

| @staticmethod
| from_validate(validate)

Validate function wrapper

Arguments:

  • validate func - validate

Returns:

  • func - wrapped validate

ReportTask#

class ReportTask(Metadata)

Report task representation.

APIUsage
Publicfrom frictionless import ReportTask

Arguments:

  • descriptor? str|dict - schema descriptor
  • time float - validation time
  • scope str[] - validation scope
  • partial bool - whether validation was partial
  • errors Error[] - validation errors
  • task Task - validation task

Raises:

  • FrictionlessException - raise any error that occurs during the process

reportTask.resource#

| @property
| resource()

Returns:

  • Resource - resource

reportTask.time#

| @property
| time()

Returns:

  • float - validation time

reportTask.valid#

| @property
| valid()

Returns:

  • bool - validation result

reportTask.scope#

| @property
| scope()

Returns:

  • str[] - validation scope

reportTask.partial#

| @property
| partial()

Returns:

  • bool - if validation partial

reportTask.stats#

| @property
| stats()

Returns:

  • dict - validation stats

reportTask.errors#

| @property
| errors()

Returns:

  • Error[] - validation errors

reportTask.error#

| @property
| error()

Returns:

  • Error - validation error if there is only one

Raises:

  • FrictionlessException - if there is more than one error

reportTask.expand#

| expand()

Expand metadata

reportTask.flatten#

| flatten(spec=["rowPosition", "fieldPosition", "code"])

Flatten the report

Arguments:

  • spec any[] - flatten specification

Returns:

  • any[] - flatten task report

Resource#

class Resource(Metadata)

Resource representation.

APIUsage
Publicfrom frictionless import Resource

This class is one of the cornerstones of the Frictionless framework. It loads a data source and allows you to stream its parsed contents. At the same time, it's a metadata class describing the data.

with Resource("data/table.csv") as resource:
    resource.header == ["id", "name"]
    resource.read_rows() == [
        {'id': 1, 'name': 'english'},
        {'id': 2, 'name': '中国人'},
    ]

Arguments:

  • source any - Source of the resource; can be in various forms. Usually, it's a string as <scheme>://path/to/file.<format>. It also can be, for example, an array of data arrays/dictionaries. Or it can be a resource descriptor dict or path.
  • descriptor dict|str - A resource descriptor provided explicitly. Keyword arguments will patch this descriptor if provided.
  • name? str - A Resource name according to the specs. It should be a slugified name of the resource.
  • title? str - A Resource title according to the specs. It should be a human-oriented title of the resource.
  • description? str - A Resource description according to the specs. It should be a human-oriented description of the resource.
  • mediatype? str - A mediatype/mimetype of the resource e.g. “text/csv”, or “application/vnd.ms-excel”. Mediatypes are maintained by the Internet Assigned Numbers Authority (IANA) in a media type registry.
  • licenses? dict[] - The license(s) under which the resource is provided. If omitted it's considered the same as the package's licenses.
  • sources? dict[] - The raw sources for this data resource. It MUST be an array of Source objects. Each Source object MUST have a title and MAY have path and/or email properties.
  • profile? str - A string identifying the profile of this descriptor. For example, tabular-data-resource.
  • scheme? str - Scheme for loading the file (file, http, ...). If not set, it'll be inferred from source.
  • format? str - File source's format (csv, xls, ...). If not set, it'll be inferred from source.
  • hashing? str - An algorithm to hash data. It defaults to 'md5'.
  • encoding? str - Source encoding. If not set, it'll be inferred from source.
  • innerpath? str - A path within the compressed file. It defaults to the first file in the archive.
  • compression? str - Source file compression (zip, ...). If not set, it'll be inferred from source.
  • control? dict|Control - File control. For more information, please check the Control documentation.
  • dialect? dict|Dialect - Table dialect. For more information, please check the Dialect documentation.
  • layout? dict|Layout - Table layout. For more information, please check the Layout documentation.
  • schema? dict|Schema - Table schema. For more information, please check the Schema documentation.
  • stats? dict - File/table stats. A dict with the following possible properties: hash, bytes, fields, rows.
  • basepath? str - A basepath of the resource. The resource's fullpath is basepath joined with path.
  • detector? Detector - File/table detector. For more information, please check the Detector documentation.
  • onerror? ignore|warn|raise - Behaviour if there is an error. It defaults to 'ignore'. The default mode ignores all errors on the resource level; they should be handled by the user, being available in Header and Row objects.
  • trusted? bool - Don't raise an exception on unsafe paths. A path provided as part of the descriptor is considered unsafe if it traverses directories or is absolute. A path provided as source or path is always trusted.
  • package? Package - A package owning this resource. It's relevant if the resource is part of some data package.

Raises:

  • FrictionlessException - raise any error that occurs during the process

resource.name#

| @Metadata.property
| name()

Returns str: resource name

resource.title#

| @Metadata.property
| title()

Returns str: resource title

resource.description#

| @Metadata.property
| description()

Returns str: resource description

resource.mediatype#

| @Metadata.property
| mediatype()

Returns str: resource mediatype

resource.licenses#

| @Metadata.property
| licenses()

Returns dict[]: resource licenses

resource.sources#

| @Metadata.property
| sources()

Returns dict[]: resource sources

resource.profile#

| @Metadata.property
| profile()

Returns str?: resource profile

resource.path#

| @Metadata.property
| path()

Returns str?: resource path

resource.data#

| @Metadata.property
| data()

Returns any[][]?: resource data

resource.scheme#

| @Metadata.property
| scheme()

Returns str?: resource scheme

resource.format#

| @Metadata.property
| format()

Returns str?: resource format

resource.hashing#

| @Metadata.property
| hashing()

Returns str?: resource hashing

resource.encoding#

| @Metadata.property
| encoding()

Returns str?: resource encoding

resource.innerpath#

| @Metadata.property
| innerpath()

Returns str?: resource compression path

resource.compression#

| @Metadata.property
| compression()

Returns str?: resource compression

resource.control#

| @Metadata.property
| control()

Returns Control?: resource control

resource.dialect#

| @Metadata.property
| dialect()

Returns Dialect?: resource dialect

resource.layout#

| @Metadata.property
| layout()

Returns:

  • Layout? - table layout

resource.schema#

| @Metadata.property
| schema()

Returns Schema: resource schema

resource.stats#

| @Metadata.property
| stats()

Returns dict?: resource stats

resource.buffer#

| @property
| buffer()

File's bytes used as a sample

These buffer bytes are used to infer characteristics of the source file (e.g. encoding, ...).

Returns:

  • bytes? - file buffer

resource.sample#

| @property
| sample()

Table's lists used as sample.

These sample rows are used to infer characteristics of the source file (e.g. schema, ...).

Returns:

  • list[]? - table sample

resource.labels#

| @property
| labels()

Returns:

  • str[]? - table labels

resource.fragment#

| @property
| fragment()

Table's lists used as fragment.

These fragment rows are used internally to infer characteristics of the source file (e.g. schema, ...).

Returns:

  • list[]? - table fragment

resource.header#

| @property
| header()

Returns:

  • str[]? - table header

resource.basepath#

| @Metadata.property(cache=False, write=False)
| basepath()

Returns str: resource basepath

resource.fullpath#

| @Metadata.property(cache=False, write=False)
| fullpath()

Returns str: resource fullpath

resource.detector#

| @Metadata.property(cache=False, write=False)
| detector()

Returns Detector: resource detector

resource.onerror#

| @Metadata.property(cache=False, write=False)
| onerror()

Returns:

  • ignore|warn|raise - on error behaviour

resource.trusted#

| @Metadata.property(cache=False, write=False)
| trusted()

Returns:

  • bool - don't raise an exception on unsafe paths

resource.package#

| @Metadata.property(cache=False, write=False)
| package()

Returns:

  • Package? - parent package

resource.tabular#

| @Metadata.property(write=False)
| tabular()

Returns bool: if resource is tabular

resource.byte_stream#

| @property
| byte_stream()

Byte stream in form of a generator

Yields:

  • gen<bytes>? - byte stream

resource.text_stream#

| @property
| text_stream()

Text stream in form of a generator

Yields:

  • gen<str[]>? - text stream

resource.list_stream#

| @property
| list_stream()

List stream in form of a generator

Yields:

  • gen<any[][]>? - list stream

resource.row_stream#

| @property
| row_stream()

Row stream in form of a generator of Row objects

Yields:

  • gen<Row[]>? - row stream

resource.expand#

| expand()

Expand metadata

resource.infer#

| infer(*, stats=False)

Infer metadata

Arguments:

  • stats? bool - stream file completely and infer stats

resource.open#

| open()

Open the resource as "io.open" does

Raises:

  • FrictionlessException - any exception that occurs

resource.close#

| close()

Close the table as "filelike.close" does

resource.closed#

| @property
| closed()

Whether the table is closed

Returns:

  • bool - if closed

resource.read_bytes#

| read_bytes(*, size=None)

Read bytes into memory

Returns:

  • bytes - resource bytes

resource.read_text#

| read_text(*, size=None)

Read text into memory

Returns:

  • str - resource text

resource.read_data#

| read_data(*, size=None)

Read data into memory

Returns:

  • any - resource data

resource.read_lists#

| read_lists(*, size=None)

Read lists into memory

Returns:

  • any[][] - table lists

resource.read_rows#

| read_rows(*, size=None)

Read rows into memory

Returns:

  • Row[] - table rows

resource.write#

| write(target=None, **options)

Write this resource to the target resource

Arguments:

  • target any|Resource - target or target resource instance
  • **options dict - Resource constructor options

resource.to_dict#

| to_dict()

Create a dict from the resource

Returns dict: dict representation

resource.to_copy#

| to_copy(**options)

Create a copy from the resource

Returns Resource: resource copy

resource.to_view#

| to_view(type="look", **options)

Create a view from the resource

See PETL's docs for more information: https://petl.readthedocs.io/en/stable/util.html#visualising-tables

Arguments:

  • type look|lookall|see|display|displayall - view's type
  • **options dict - options to be passed to PETL

Returns:

  • str - resource's view

resource.to_snap#

| to_snap(*, json=False)

Create a snapshot from the resource

Arguments:

  • json bool - make data types compatible with JSON format

Returns:

  • list - resource's data

resource.to_inline#

| to_inline(*, dialect=None)

Helper to export resource as inline data

resource.to_pandas#

| to_pandas(*, dialect=None)

Helper to export resource as a Pandas dataframe

resource.from_petl#

| @staticmethod
| from_petl(view, **options)

Create a resource from PETL view

resource.to_petl#

| to_petl(normalize=False)

Export resource as a PETL table

Row#

class Row(dict)

Row representation

APIUsage
Publicfrom frictionless import Row

The constructor of this object is not public API

This object is returned by extract, resource.read_rows, and other functions.

rows = extract("data/table.csv")
for row in rows:
    # work with the Row
    ...

Arguments:

  • cells any[] - array of cells
  • field_info dict - special field info structure
  • row_position int - row position from 1
  • row_number int - row number from 1

row.cells#

| @cached_property
| cells()

Returns:

  • any[] - row cells

row.fields#

| @cached_property
| fields()

Returns:

  • Field[] - table schema fields

row.field_names#

| @cached_property
| field_names()

Returns:

  • str[] - table field names

row.field_positions#

| @cached_property
| field_positions()

Returns:

  • int[] - table field positions

row.row_position#

| @cached_property
| row_position()

Returns:

  • int - row position from 1

row.row_number#

| @cached_property
| row_number()

Returns:

  • int - row number from 1

row.blank_cells#

| @cached_property
| blank_cells()

A mapping indexed by a field name with blank cells before parsing

Returns:

  • dict - row blank cells

row.error_cells#

| @cached_property
| error_cells()

A mapping indexed by a field name with error cells before parsing

Returns:

  • dict - row error cells

row.errors#

| @cached_property
| errors()

Returns:

  • Error[] - row errors

row.valid#

| @cached_property
| valid()

Returns:

  • bool - if row valid

row.to_str#

| to_str()

Returns:

  • str - a row as a CSV string

row.to_list#

| to_list(*, json=False, types=None)

Arguments:

  • json bool - make data types compatible with JSON format
  • types str[] - list of supported types

Returns:

  • any[] - a row as a list

row.to_dict#

| to_dict(*, json=False, types=None)

Arguments:

  • json bool - make data types compatible with JSON format

Returns:

  • dict - a row as a dictionary

S3Control#

class S3Control(Control)

S3 control representation

APIUsage
Publicfrom frictionless.plugins.s3 import S3Control

Arguments:

  • descriptor? str|dict - descriptor
  • endpoint_url? string - endpoint url

Raises:

  • FrictionlessException - raise any error that occurs during the process

s3Control.expand#

| expand()

Expand metadata

S3Loader#

class S3Loader(Loader)

S3 loader implementation.

APIUsage
Publicfrom frictionless.plugins.s3 import S3Loader

S3Plugin#

class S3Plugin(Plugin)

Plugin for S3

APIUsage
Publicfrom frictionless.plugins.s3 import S3Plugin

Schema#

class Schema(Metadata)

Schema representation

APIUsage
Publicfrom frictionless import Schema

This class is one of the cornerstones of the Frictionless framework. It allows you to work with Table Schema and its fields.

schema = Schema('schema.json')
schema.add_field(Field(name='name', type='string'))

Arguments:

  • descriptor? str|dict - schema descriptor
  • fields? dict[] - list of field descriptors
  • missing_values? str[] - missing values
  • primary_key? str[] - primary key
  • foreign_keys? dict[] - foreign keys

Raises:

  • FrictionlessException - raise any error that occurs during the process
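For orientation, a minimal Table Schema descriptor matching the arguments above (field names are illustrative):

```python
# A minimal Table Schema descriptor. Note the spec's camelCase keys
# for missingValues and primaryKey.
schema_descriptor = {
    "fields": [
        {"name": "id", "type": "integer"},
        {"name": "name", "type": "string"},
    ],
    "missingValues": [""],
    "primaryKey": ["id"],
}
```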

schema.missing_values#

| @Metadata.property
| missing_values()

Returns:

  • str[] - missing values

schema.primary_key#

| @Metadata.property
| primary_key()

Returns:

  • str[] - primary key field names

schema.foreign_keys#

| @Metadata.property
| foreign_keys()

Returns:

  • dict[] - foreign keys

schema.fields#

| @Metadata.property
| fields()

Returns:

  • Field[] - an array of field instances

schema.field_names#

| @Metadata.property(cache=False, write=False)
| field_names()

Returns:

  • str[] - an array of field names

schema.add_field#

| add_field(descriptor)

Add new field to schema.

The schema descriptor will be validated with newly added field descriptor.

Arguments:

  • descriptor dict - field descriptor

Returns:

  • Field/None - added Field instance or None if not added

schema.get_field#

| get_field(name)

Get schema's field by name.

Arguments:

  • name str - schema field name

Raises:

  • FrictionlessException - if field is not found

Returns:

  • Field - Field instance or None if not found

schema.has_field#

| has_field(name)

Check if a field is present

Arguments:

  • name str - schema field name

Returns:

  • bool - whether there is the field

schema.remove_field#

| remove_field(name)

Remove field by name.

The schema descriptor will be validated after field descriptor removal.

Arguments:

  • name str - schema field name

Raises:

  • FrictionlessException - if field is not found

Returns:

  • Field/None - removed Field instances or None if not found

schema.expand#

| expand()

Expand the schema

schema.read_cells#

| read_cells(cells)

Read a list of cells (normalize/cast)

Arguments:

  • cells any[] - list of cells

Returns:

  • any[] - list of processed cells

schema.write_cells#

| write_cells(cells, *, types=[])

Write a list of cells (normalize/uncast)

Arguments:

  • cells any[] - list of cells

Returns:

  • any[] - list of processed cells

schema.from_jsonschema#

| @staticmethod
| from_jsonschema(profile)

Create a Schema from JSONSchema profile

Arguments:

  • profile str|dict - path or dict with JSONSchema profile

Returns:

  • Schema - schema instance

Server#

class Server()

Server representation

APIUsage
Publicfrom frictionless import Server

server.start#

| start(port)

Start the server

Arguments:

  • port int - HTTP port

server.stop#

| stop()

Stop the server

ServerPlugin#

class ServerPlugin(Plugin)

Plugin for Server

APIUsage
Publicfrom frictionless.plugins.server import ServerPlugin

SpssDialect#

class SpssDialect(Dialect)

Spss dialect representation

APIUsage
Publicfrom frictionless.plugins.spss import SpssDialect

Arguments:

  • descriptor? str|dict - descriptor

Raises:

  • FrictionlessException - raise any error that occurs during the process

SpssParser#

class SpssParser(Parser)

Spss parser implementation.

APIUsage
Publicfrom frictionless.plugins.spss import SpssParser

SpssPlugin#

class SpssPlugin(Plugin)

Plugin for SPSS

APIUsage
Publicfrom frictionless.plugins.spss import SpssPlugin

SqlDialect#

class SqlDialect(Dialect)

SQL dialect representation

APIUsage
Publicfrom frictionless.plugins.sql import SqlDialect

Arguments:

  • descriptor? str|dict - descriptor
  • table str - table name
  • prefix str - prefix for all table names
  • order_by? str - order_by statement passed to SQL
  • namespace? str - SQL schema

Raises:

  • FrictionlessException - raise any error that occurs during the process

SqlParser#

class SqlParser(Parser)

SQL parser implementation.

APIUsage
Publicfrom frictionless.plugins.sql import SqlParser

SqlPlugin#

class SqlPlugin(Plugin)

Plugin for SQL

APIUsage
Publicfrom frictionless.plugins.sql import SqlPlugin

SqlStorage#

class SqlStorage(Storage)

SQL storage implementation

APIUsage
Publicfrom frictionless.plugins.sql import SqlStorage

Arguments:

  • url? string - SQL connection string
  • engine? object - sqlalchemy engine
  • prefix? str - prefix for all tables
  • namespace? str - SQL schema

Status#

class Status(Metadata)

Status representation.

Arguments:

  • descriptor? str|dict - schema descriptor

Raises:

  • FrictionlessException - raise any error that occurs during the process

status.version#

| @property
| version()

Returns:

  • str - frictionless version

status.time#

| @property
| time()

Returns:

  • float - transformation time

status.valid#

| @property
| valid()

Returns:

  • bool - transformation result

status.stats#

| @property
| stats()

Returns:

  • dict - transformation stats

status.errors#

| @property
| errors()

Returns:

  • Error[] - transformation errors

status.tasks#

| @property
| tasks()

Returns:

  • StatusTask[] - transformation tasks

status.task#

| @property
| task()

Returns:

  • StatusTask - transformation task (if there is only one)

Raises:

  • FrictionlessException - if there is more than one task

StatusTask#

class StatusTask(Metadata)

Status Task representation

statusTask.time#

| @property
| time()

Returns:

  • float - transformation time

statusTask.valid#

| @property
| valid()

Returns:

  • bool - transformation result

statusTask.stats#

| @property
| stats()

Returns:

  • dict - transformation stats

statusTask.errors#

| @property
| errors()

Returns:

  • Error[] - transformation errors

statusTask.target#

| @property
| target()

Returns:

  • any - transformation target

statusTask.type#

| @property
| type()

Returns:

  • str - transformation type

Step#

class Step(Metadata)

Step representation

step.transform_resource#

| transform_resource(resource)

Transform resource

Arguments:

  • resource Resource - resource

Returns:

  • resource Resource - resource

step.transform_package#

| transform_package(package)

Transform package

Arguments:

  • package Package - package

Returns:

  • package Package - package

StreamControl#

class StreamControl(Control)

Stream control representation

APIUsage
Publicfrom frictionless.plugins.stream import StreamControl

Arguments:

  • descriptor? str|dict - descriptor

Raises:

  • FrictionlessException - raise any error that occurs during the process

StreamLoader#

class StreamLoader(Loader)

Stream loader implementation.

APIUsage
Publicfrom frictionless.plugins.stream import StreamLoader

StreamPlugin#

class StreamPlugin(Plugin)

Plugin for Local Data

APIUsage
Publicfrom frictionless.plugins.stream import StreamPlugin

System#

class System()

System representation

APIUsage
Publicfrom frictionless import system

This class provides the ability to make Frictionless system calls. It's available as the frictionless.system singleton.

system.register#

| register(name, plugin)

Register a plugin

Arguments:

  • name str - plugin name
  • plugin Plugin - plugin to register
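The registry pattern behind system.register and the create_* hooks can be sketched in plain Python (an illustration of the dispatch idea, not the library's internals; CsvPlugin and the string "parser" stand in for real Plugin/Parser objects):

```python
# Illustrative sketch of a plugin registry: register() stores plugins by
# name; a creation hook asks each plugin in turn until one handles the
# request (returns a non-None object).

class CsvPlugin:  # hypothetical plugin
    def create_parser(self, resource):
        if resource.get("format") == "csv":
            return "csv-parser"  # a real plugin would return a Parser
        return None

class Registry:
    def __init__(self):
        self.plugins = {}

    def register(self, name, plugin):
        self.plugins[name] = plugin

    def create_parser(self, resource):
        for plugin in self.plugins.values():
            parser = plugin.create_parser(resource)
            if parser is not None:
                return parser
        raise ValueError("cannot create parser: %s" % resource)

registry = Registry()
registry.register("csv", CsvPlugin())
parser = registry.create_parser({"format": "csv"})
```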

system.create_check#

| create_check(descriptor)

Create check

Arguments:

  • descriptor dict - check descriptor

Returns:

  • Check - check

system.create_control#

| create_control(resource, *, descriptor)

Create control

Arguments:

  • resource Resource - control resource
  • descriptor dict - control descriptor

Returns:

  • Control - control

system.create_dialect#

| create_dialect(resource, *, descriptor)

Create dialect

Arguments:

  • resource Resource - dialect resource
  • descriptor dict - dialect descriptor

Returns:

  • Dialect - dialect

system.create_error#

| create_error(descriptor)

Create error

Arguments:

  • descriptor dict - error descriptor

Returns:

  • Error - error

system.create_file#

| create_file(source, **options)

Create file

Arguments:

  • source any - file source
  • options dict - file options

Returns:

  • File - file

system.create_loader#

| create_loader(resource)

Create loader

Arguments:

  • resource Resource - loader resource

Returns:

  • Loader - loader

system.create_parser#

| create_parser(resource)

Create parser

Arguments:

  • resource Resource - parser resource

Returns:

  • Parser - parser

system.create_server#

| create_server(name, **options)

Create server

Arguments:

  • name str - server name
  • options dict - server options

Returns:

  • Server - server

system.create_step#

| create_step(descriptor)

Create step

Arguments:

  • descriptor dict - step descriptor

Returns:

  • Step - step

system.create_storage#

| create_storage(name, source, **options)

Create storage

Arguments:

  • name str - storage name
  • source any - storage source
  • options dict - storage options

Returns:

  • Storage - storage

system.create_type#

| create_type(field)

Create type

Arguments:

  • field Field - corresponding field

Returns:

  • Type - type

Type#

class Type()

Data type representation

APIUsage
Publicfrom frictionless import Type

This class is for subclassing.

Arguments:

  • field Field - field

type.constraints#

Returns:

  • str[] - a list of supported constraints

type.field#

| @cached_property
| field()

Returns:

  • Field - field

type.read_cell#

| read_cell(cell)

Convert cell (read direction)

Arguments:

  • cell any - cell to convert

Returns:

  • any - converted cell

type.write_cell#

| write_cell(cell)

Convert cell (write direction)

Arguments:

  • cell any - cell to convert

Returns:

  • any - converted cell
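A custom type follows the read_cell/write_cell contract described above. The sketch below is plain Python (BoolType and its value mapping are illustrative; a real implementation would subclass frictionless.Type): read_cell parses a raw cell and returns None to signal a cast error, write_cell serializes it back.

```python
# Illustrative sketch of the Type interface: parse on read, serialize on
# write, report unsupported constraints via the `constraints` attribute.

class BoolType:
    constraints = ["required", "enum"]  # constraints this type supports

    def read_cell(self, cell):
        # Convert cell (read direction); None means the cast failed
        mapping = {"true": True, "false": False}
        return mapping.get(str(cell).strip().lower())

    def write_cell(self, cell):
        # Convert cell (write direction)
        return "true" if cell else "false"

bool_type = BoolType()
```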

XlsParser#

class XlsParser(Parser)

XLS parser implementation.

APIUsage
Publicfrom frictionless.plugins.excel import XlsParser

XlsxParser#

class XlsxParser(Parser)

XLSX parser implementation.

APIUsage
Publicfrom frictionless.plugins.excel import XlsxParser

checks.baseline#

class baseline(Check)

Check a table for basic errors

APIUsage
Publicfrom frictionless import checks
Implicitvalidate(...)

This check is enabled by default for any validate function run.

checks.deviated_value#

class deviated_value(Check)

Check for deviated values in a field

APIUsage
Publicfrom frictionless import checks
Implicitvalidate(checks=[{"code": "deviated-value", **descriptor}])

This check can be enabled using the checks parameter for the validate function.

Arguments:

  • descriptor dict - check's descriptor
  • field_name str - a field name to check
  • average? str - one of "mean", "median" or "mode" (default: "mean")
  • interval? int - statistical interval (default: 3)
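The deviated-value rule can be sketched with the standard library under the stated defaults (this is an illustration of the rule, not the library's implementation): a value is flagged when it falls outside average ± interval × standard deviation.

```python
# Illustrative sketch of the deviated-value rule.
import statistics

def deviated_values(values, average="mean", interval=3):
    # `average` names a statistics-module function: mean, median, or mode
    avg = getattr(statistics, average)(values)
    spread = interval * statistics.stdev(values)
    return [v for v in values if not (avg - spread <= v <= avg + spread)]
```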

checks.duplicate_row#

class duplicate_row(Check)

Check for duplicate rows

APIUsage
Publicfrom frictionless import checks
Implicitvalidate(checks=[{"code": "duplicate-row"}])

This check can be enabled using the checks parameter for the validate function.
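Duplicate detection of this kind can be sketched in plain Python (illustrative, not the library's implementation): remember each row's cells and report a row whose cells were already seen.

```python
# Illustrative sketch of duplicate-row detection.

def duplicate_rows(rows):
    seen = {}
    duplicates = []  # (duplicate_row_number, first_row_number)
    for number, row in enumerate(rows, start=1):
        key = tuple(row)
        if key in seen:
            duplicates.append((number, seen[key]))
        else:
            seen[key] = number
    return duplicates

rows = [["1", "english"], ["2", "german"], ["1", "english"]]
```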

checks.forbidden_value#

class forbidden_value(Check)

Check for forbidden values in a field

APIUsage
Publicfrom frictionless import checks
Implicitvalidate(checks=[{"code": "forbidden-value", **descriptor}])

This check can be enabled using the checks parameter for the validate function.

Arguments:

  • descriptor dict - check's descriptor
  • field_name str - a field name to look into
  • forbidden any[] - a list of forbidden values

checks.row_constraint#

class row_constraint(Check)

Check that every row satisfies a provided Python expression

APIUsage
Publicfrom frictionless import checks
Implicitvalidate(checks=[{"code": "row-constraint", **descriptor}])

This check can be enabled using the checks parameter for the validate function. The syntax for the row constraint check can be found here - https://github.com/danthedeckie/simpleeval

Arguments:

  • descriptor dict - check's descriptor
  • formula str - a python expression to evaluate against a row
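Evaluating a formula against each row can be sketched in plain Python. This sketch uses eval with builtins stripped purely for illustration; the library relies on simpleeval, which is the safer choice for untrusted formulas.

```python
# Illustrative sketch of a row-constraint check: evaluate the formula with
# the row's cells as local variables and collect rows where it is not true.

def failing_rows(rows, formula):
    """Return 1-based numbers of rows for which the formula is not true."""
    failed = []
    for number, row in enumerate(rows, start=1):
        if not eval(formula, {"__builtins__": {}}, dict(row)):
            failed.append(number)
    return failed

rows = [{"salary": 1000, "bonus": 100}, {"salary": 500, "bonus": 600}]
```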

checks.sequential_value#

class sequential_value(Check)

Check that a column has sequential values

APIUsage
Publicfrom frictionless import checks
Implicitvalidate(checks=[{"code": "sequential-value", **descriptor}])

This check can be enabled using the checks parameter for the validate function.

Arguments:

  • descriptor dict - check's descriptor
  • field_name str - a field name to check

checks.truncated_value#

class truncated_value(Check)

Check for possible truncated values

APIUsage
Publicfrom frictionless import checks
Implicitvalidate(checks=[{"code": "truncated-value"}])

This check can be enabled using the checks parameter for the validate function.
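One plausible heuristic for "possible truncation" (an assumption for illustration, not necessarily the library's exact rule) is that strings at common column-size limits and integers at common integer-type caps are suspicious:

```python
# Illustrative truncation heuristic: flag cells sitting exactly at common
# storage limits, which often indicates silent truncation upstream.

TRUNCATED_STRING_LENGTHS = {255, 256}
TRUNCATED_INTEGERS = {2147483647, 4294967295}  # INT32_MAX, UINT32_MAX

def looks_truncated(cell):
    if isinstance(cell, int):
        return cell in TRUNCATED_INTEGERS
    if isinstance(cell, str):
        return len(cell) in TRUNCATED_STRING_LENGTHS
    return False
```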

describe#

describe(source=None, *, type=None, **options)

Describe the data source

APIUsage
Publicfrom frictionless import describe

Arguments:

  • source any - data source
  • type str - source type - schema, resource or package (default: infer)
  • **options dict - options for the underlying describe function

Returns:

  • Package|Resource|Schema - metadata
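The schema side of describe rests on type inference over sample cells. A minimal sketch in plain Python (the real inference handles many more types and formats): try the narrowest type first and fall back to string.

```python
# Illustrative sketch of describe-style type inference on string cells.

def infer_type(values):
    # Attempt casts from narrowest to widest; "string" always succeeds
    for cast, name in ((int, "integer"), (float, "number")):
        try:
            for value in values:
                cast(value)
            return name
        except ValueError:
            continue
    return "string"
```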

describe_dialect#

describe_dialect(source=None, **options)

Describe the given source as a dialect

APIUsage
Publicfrom frictionless import describe_dialect

Arguments:

  • source any - data source
  • **options dict - describe resource options

Returns:

  • Dialect - file dialect

describe_package#

describe_package(source=None, *, expand=False, stats=False, **options)

Describe the given source as a package

APIUsage
Publicfrom frictionless import describe_package

Arguments:

  • source any - data source
  • expand? bool - if True it will expand the metadata
  • stats? bool - if True infer resource's stats
  • **options dict - Package constructor options

Returns:

  • Package - data package

describe_resource#

describe_resource(source=None, *, expand=False, stats=False, **options)

Describe the given source as a resource

APIUsage
Publicfrom frictionless import describe_resource

Arguments:

  • source any - data source
  • expand? bool - if True it will expand the metadata
  • stats? bool - if True infer resource's stats
  • **options dict - Resource constructor options

Returns:

  • Resource - data resource

describe_schema#

describe_schema(source=None, **options)

Describe the given source as a schema

APIUsage
Publicfrom frictionless import describe_schema

Arguments:

  • source any - data source
  • **options dict - describe resource options

Returns:

  • Schema - table schema

errors.CellError#

class CellError(RowError)

Cell error representation

Arguments:

  • descriptor? str|dict - error descriptor

  • note str - an error note

  • cells str[] - row cells

  • row_number int - row number

  • row_position int - row position

  • cell str - errored cell

  • field_name str - field name

  • field_number int - field number

  • field_position int - field position

    Raises#

  • FrictionlessException - raise any error that occurs during the process

errors.CellError.from_row#

| @classmethod
| from_row(cls, row, *, note, field_name)

Create an error from a cell

Arguments:

  • row Row - row
  • note str - note
  • field_name str - field name

Returns:

  • CellError - error

errors.HeaderError#

class HeaderError(TableError)

Header error representation

Arguments:

  • descriptor? str|dict - error descriptor
  • note str - an error note
  • labels str[] - header labels
  • label str - an errored label
  • field_name str - field name
  • field_number int - field number
  • field_position int - field position

Raises:

  • FrictionlessException - raise any error that occurs during the process

errors.LabelError#

class LabelError(HeaderError)

Label error representation

Arguments:

  • descriptor? str|dict - error descriptor
  • note str - an error note
  • labels str[] - header labels
  • label str - an errored label
  • field_name str - field name
  • field_number int - field number
  • field_position int - field position

Raises:

  • FrictionlessException - raise any error that occurs during the process

errors.RowError#

class RowError(TableError)

Row error representation

Arguments:

  • descriptor? str|dict - error descriptor
  • note str - an error note
  • row_number int - row number
  • row_position int - row position

Raises:

  • FrictionlessException - raise any error that occurs during the process

errors.RowError.from_row#

| @classmethod
| from_row(cls, row, *, note)

Create an error from a row

Arguments:

  • row Row - row
  • note str - note

Returns:

  • RowError - error

extract#

extract(source=None, *, type=None, process=None, stream=False, **options)

Extract resource rows

APIUsage
Publicfrom frictionless import extract

Arguments:

  • source dict|str - data source
  • type str - source type - package or resource (default: infer)
  • process? func - a row processor function
  • stream? bool - return a row stream(s) instead of loading into memory
  • **options dict - options for the underlying function

Returns:

  • Row[]|{path - Row[]}: rows in a form depending on the source type

extract_package#

extract_package(source=None, *, process=None, stream=False, **options)

Extract package rows

APIUsage
Publicfrom frictionless import extract_package

Arguments:

  • source dict|str - data resource descriptor
  • process? func - a row processor function
  • stream? bool - return row streams instead of loading into memory
  • **options dict - Package constructor options

Returns:

  • {path - Row[]}: a dictionary of arrays/streams of rows

extract_resource#

extract_resource(source=None, *, process=None, stream=False, **options)

Extract resource rows

APIUsage
Publicfrom frictionless import extract_resource

Arguments:

  • source any|Resource - data resource
  • process? func - a row processor function
  • stream? bool - return a row stream instead of loading into memory
  • **options dict - Resource constructor options

Returns:

  • Row[] - an array/stream of rows

steps.cell_convert#

class cell_convert(Step)

Convert cell

steps.cell_fill#

class cell_fill(Step)

Fill cell

steps.cell_format#

class cell_format(Step)

Format cell

steps.cell_interpolate#

class cell_interpolate(Step)

Interpolate cell

steps.cell_replace#

class cell_replace(Step)

Replace cell

steps.cell_set#

class cell_set(Step)

Set cell

steps.field_add#

class field_add(Step)

Add field

steps.field_filter#

class field_filter(Step)

Filter fields

steps.field_move#

class field_move(Step)

Move field

steps.field_remove#

class field_remove(Step)

Remove field

steps.field_split#

class field_split(Step)

Split field

steps.field_unpack#

class field_unpack(Step)

Unpack field

steps.field_update#

class field_update(Step)

Update field

steps.resource_add#

class resource_add(Step)

Add resource

steps.resource_remove#

class resource_remove(Step)

Remove resource

steps.resource_transform#

class resource_transform(Step)

Transform resource

steps.resource_update#

class resource_update(Step)

Update resource

steps.row_filter#

class row_filter(Step)

Filter rows

steps.row_search#

class row_search(Step)

Search rows

steps.row_slice#

class row_slice(Step)

Slice rows

steps.row_sort#

class row_sort(Step)

Sort rows

steps.row_split#

class row_split(Step)

Split rows

steps.row_subset#

class row_subset(Step)

Subset rows

steps.row_ungroup#

class row_ungroup(Step)

Ungroup rows

steps.table_aggregate#

class table_aggregate(Step)

Aggregate table

steps.table_attach#

class table_attach(Step)

Attach table

steps.table_debug#

class table_debug(Step)

Debug table

steps.table_diff#

class table_diff(Step)

Diff tables

steps.table_intersect#

class table_intersect(Step)

Intersect tables

steps.table_join#

class table_join(Step)

Join tables

steps.table_melt#

class table_melt(Step)

Melt table

steps.table_merge#

class table_merge(Step)

Merge tables

steps.table_normalize#

class table_normalize(Step)

Normalize table

steps.table_pivot#

class table_pivot(Step)

Pivot table

steps.table_print#

class table_print(Step)

Print table

steps.table_recast#

class table_recast(Step)

Recast table

steps.table_transpose#

class table_transpose(Step)

Transpose table

steps.table_validate#

class table_validate(Step)

Validate table

steps.table_write#

class table_write(Step)

Write table

transform#

transform(source=None, type=None, **options)

Transform the data source

APIUsage
Publicfrom frictionless import transform

Arguments:

  • source any - data source
  • type str - source type - package, resource or pipeline (default: infer)
  • **options dict - options for the underlying function

Returns:

  • any - the transform result

transform_package#

transform_package(source=None, *, steps, **options)

Transform package

APIUsage
Publicfrom frictionless import transform_package

Arguments:

  • source any - data source
  • steps Step[] - transform steps
  • **options dict - Package constructor options

Returns:

  • Package - the transform result

transform_pipeline#

transform_pipeline(source=None, *, parallel=False, **options)

Transform pipeline

APIUsage
Publicfrom frictionless import transform_pipeline

Arguments:

  • source any - a pipeline descriptor
  • parallel? bool - enable multiprocessing
  • **options dict - Pipeline constructor options

Returns:

  • any - the pipeline output

transform_resource#

transform_resource(source=None, *, steps, **options)

Transform resource

APIUsage
Publicfrom frictionless import transform_resource

Arguments:

  • source any - data source
  • steps Step[] - transform steps
  • **options dict - Resource constructor options

Returns:

  • Resource - the transform result

types.AnyType#

class AnyType(Type)

Any type implementation.

APIUsage
Publicfrom frictionless import types

types.ArrayType#

class ArrayType(Type)

Array type implementation.

APIUsage
Publicfrom frictionless import types

types.BooleanType#

class BooleanType(Type)

Boolean type implementation.

APIUsage
Publicfrom frictionless import types

types.DateType#

class DateType(Type)

Date type implementation.

APIUsage
Publicfrom frictionless import types

types.DatetimeType#

class DatetimeType(Type)

Datetime type implementation.

APIUsage
Publicfrom frictionless import types

types.DurationType#

class DurationType(Type)

Duration type implementation.

APIUsage
Publicfrom frictionless import types

types.GeojsonType#

class GeojsonType(Type)

Geojson type implementation.

APIUsage
Publicfrom frictionless import types

types.GeopointType#

class GeopointType(Type)

Geopoint type implementation.

APIUsage
Publicfrom frictionless import types

types.IntegerType#

class IntegerType(Type)

Integer type implementation.

APIUsage
Publicfrom frictionless import types

types.NumberType#

class NumberType(Type)

Number type implementation.

APIUsage
Publicfrom frictionless import types

types.ObjectType#

class ObjectType(Type)

Object type implementation.

APIUsage
Publicfrom frictionless import types

types.StringType#

class StringType(Type)

String type implementation.

APIUsage
Publicfrom frictionless import types

types.TimeType#

class TimeType(Type)

Time type implementation.

APIUsage
Publicfrom frictionless import types

types.YearType#

class YearType(Type)

Year type implementation.

APIUsage
Publicfrom frictionless import types

types.YearmonthType#

class YearmonthType(Type)

Yearmonth type implementation.

APIUsage
Publicfrom frictionless import types

validate#

@Report.from_validate
validate(source=None, type=None, **options)

Validate resource

APIUsage
Publicfrom frictionless import validate

Arguments:

  • source dict|str - a data source
  • type str - source type - inquiry, package, resource, schema or table
  • **options dict - options for the underlying function

Returns:

  • Report - validation report

validate_inquiry#

@Report.from_validate
validate_inquiry(source=None, *, parallel=False, **options)

Validate inquiry

APIUsage
Publicfrom frictionless import validate_inquiry

Arguments:

  • source dict|str - an inquiry descriptor
  • parallel? bool - enable multiprocessing

Returns:

  • Report - validation report

validate_package#

@Report.from_validate
validate_package(source=None, original=False, parallel=False, **options)

Validate package

APIUsage
Publicfrom frictionless import validate_package

Arguments:

  • source dict|str - a package descriptor
  • basepath? str - package basepath
  • trusted? bool - don't raise an exception on unsafe paths
  • original? bool - don't call package.infer
  • parallel? bool - enable multiprocessing
  • **options dict - Package constructor options

Returns:

  • Report - validation report

validate_resource#

@Report.from_validate
validate_resource(source=None, *, checks=None, original=False, pick_errors=None, skip_errors=None, limit_errors=config.DEFAULT_LIMIT_ERRORS, limit_memory=config.DEFAULT_LIMIT_MEMORY, **options)

Validate resource

APIUsage
Publicfrom frictionless import validate_resource

Arguments:

  • source any - the source of the resource
  • checks? list - a list of checks
  • pick_errors? (str|int)[] - pick errors
  • skip_errors? (str|int)[] - skip errors
  • limit_errors? int - limit errors
  • limit_memory? int - limit memory
  • original? bool - validate resource as it is
  • **options? dict - Resource constructor options

Returns:

  • Report - validation report
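The pick_errors/skip_errors semantics can be sketched in plain Python (illustrative; entries here are error codes, and the dict-shaped errors stand in for real Error objects): pick keeps only the listed codes, skip drops them.

```python
# Illustrative sketch of pick/skip error filtering.

def filter_errors(errors, pick_errors=None, skip_errors=None):
    kept = []
    for error in errors:
        if pick_errors and error["code"] not in pick_errors:
            continue
        if skip_errors and error["code"] in skip_errors:
            continue
        kept.append(error)
    return kept

errors = [{"code": "blank-row"}, {"code": "duplicate-row"}]
```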

validate_schema#

@Report.from_validate
validate_schema(source=None, **options)

Validate schema

APIUsage
Publicfrom frictionless import validate_schema

Arguments:

  • source dict|str - a schema descriptor

Returns:

  • Report - validation report