ormar
Overview
The ormar package is an async mini ORM for Python, with support for Postgres,
MySQL, and SQLite.
The main benefits of using ormar are:
- getting an async ORM that can be used with async frameworks (fastapi, starlette etc.)
- getting just one model to maintain - you don't have to maintain pydantic and other orm models (sqlalchemy, peewee, gino etc.)
The goal was to create a simple ORM that can be used directly with fastapi (as request and response models) and that bases its data validation on pydantic.
Ormar - apart from the obvious "ORM" in the name - gets its name from "ormar" in Swedish, which means snakes, and "ormar" in Croatian, which means cabinet.
And what's a better name for a python ORM than a cabinet of snakes :)
If you like ormar remember to star the repository on github!
The bigger the community we build, the easier it will be to catch bugs and attract contributors ;)
Documentation
Check out the documentation for details.
Note that for brevity most of the documentation snippets omit creating the database and scheduling the functions for asynchronous execution.
If you want more real-life examples than those in the documentation, have a look at the tests folder, since most of the tests actually have to create and connect to a database.
Just remember that those are - well - tests, and not every solution used there is suitable for real-life applications.
Part of the fastapi ecosystem
As part of the fastapi ecosystem, ormar is supported by libraries that work with databases.
As of now ormar is supported by:
If you maintain or use a different library and would like it to support ormar let us know how we can help.
Dependencies
Ormar is built with:
- sqlalchemy core for query building
- databases for cross-database async support
- pydantic for data validation
- typing_extensions for python 3.6 - 3.7
License
ormar is built as open-source software and will remain completely free (MIT license).
As I write open-source code to solve everyday problems in my work and to promote and help build a strong python community, you can say thank you and buy me a coffee or sponsor me with a monthly amount to help ensure my work remains free and maintained.
Migrating from sqlalchemy and existing databases
If you currently use sqlalchemy and would like to switch to ormar check out the auto-translation
tool that can help you with translating existing sqlalchemy orm models so you do not have to do it manually.
Beta versions available at github: sqlalchemy-to-ormar
or simply pip install sqlalchemy-to-ormar
sqlalchemy-to-ormar can be used in tandem with sqlacodegen to auto-map/generate ormar models from an existing database, even if you don't use sqlalchemy for your project.
Migrations & Database creation
Because ormar is built on SQLAlchemy core, you can use alembic to provide
database migrations (and you really should for production code).
For tests and basic applications, sqlalchemy is more than enough:
# note this is just a partial snippet; the full working example is below
# 1. Imports
import sqlalchemy
import databases
# 2. Initialization
DATABASE_URL = "sqlite:///db.sqlite"
database = databases.Database(DATABASE_URL)
metadata = sqlalchemy.MetaData()
# Define models here
# 3. Database creation and tables creation
engine = sqlalchemy.create_engine(DATABASE_URL)
metadata.create_all(engine)
For a sample configuration of alembic and more information regarding migrations and database creation visit migrations documentation section.
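Below is a minimal sketch of the relevant fragment of an alembic env.py - it only illustrates pointing alembic at the same MetaData your models are registered on; the app.models import path and the hard-coded database URL are illustrative assumptions, adjust them to your project:
# alembic/env.py (fragment) - illustrative sketch, not a complete env.py
from alembic import context
import sqlalchemy
# import the config object your models were declared with (path is an assumption)
from app.models import base_ormar_config
target_metadata = base_ormar_config.metadata
def run_migrations_online() -> None:
    connectable = sqlalchemy.create_engine("sqlite:///db.sqlite")
    with connectable.connect() as connection:
        context.configure(connection=connection, target_metadata=target_metadata)
        with context.begin_transaction():
            context.run_migrations()
run_migrations_online()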
Package versions
ormar is still under development:
We recommend pinning any dependencies (e.g. ormar~=0.9.1).
ormar also follows the release numbering convention that breaking changes bump the major number,
while other changes and fixes bump the minor number, so with the latter you should be safe to
update - yet always read the release notes first.
Example: 0.5.2 -> 0.6.0 is breaking, 0.5.2 -> 0.5.3 is non-breaking.
Asynchronous Python
Note that ormar is an asynchronous ORM, which means that you have to await calls to
its methods, which are scheduled for execution in an event loop. Python has a built-in module
asyncio that allows you to do just that.
Note that most "normal" python interpreters do not allow the use of await
outside of a function (because you actually schedule the coroutine for delayed execution
and don't get the result immediately).
In a modern web framework (like fastapi), the framework will handle this for you, but if
you plan to do this on your own, you need to handle it manually, as described in the
quick start below.
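In its simplest form that boils down to wrapping your awaits in a coroutine and handing it to asyncio.run - a minimal sketch (the main name and the body are illustrative):
import asyncio
async def main():
    # await your ormar calls here, e.g. await Author.objects.all()
    ...
asyncio.run(main())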
Quick Start
Note that you can find the same script in the examples folder on github.
from typing import Optional
import databases
import pydantic
import ormar
import sqlalchemy
DATABASE_URL = "sqlite:///db.sqlite"
base_ormar_config = ormar.OrmarConfig(
database=databases.Database(DATABASE_URL),
metadata=sqlalchemy.MetaData(),
engine=sqlalchemy.create_engine(DATABASE_URL),
)
# Note that all type hints are optional
# below is a perfectly valid model declaration
# class Author(ormar.Model):
# ormar_config = base_ormar_config.copy(tablename="authors")
#
# id = ormar.Integer(primary_key=True) # <= notice no field types
# name = ormar.String(max_length=100)
class Author(ormar.Model):
ormar_config = base_ormar_config.copy(tablename="authors")
id: int = ormar.Integer(primary_key=True)
name: str = ormar.String(max_length=100)
class Book(ormar.Model):
ormar_config = base_ormar_config.copy(tablename="books")
id: int = ormar.Integer(primary_key=True)
author: Optional[Author] = ormar.ForeignKey(Author)
title: str = ormar.String(max_length=100)
year: int = ormar.Integer(nullable=True)
# create the database
# note that in production you should use migrations
# note that this is not required if you connect to existing database
# just to be sure we clear the db before
base_ormar_config.metadata.drop_all(base_ormar_config.engine)
base_ormar_config.metadata.create_all(base_ormar_config.engine)
# all functions below are divided into functionality categories
# note how all functions are defined with async - hence they can use await AND need to
# be awaited themselves
async def create():
# Create some records to work with through the QuerySet.create method.
# Note that queryset is exposed on each Model's class as objects
tolkien = await Author.objects.create(name="J.R.R. Tolkien")
await Book.objects.create(author=tolkien, title="The Hobbit", year=1937)
await Book.objects.create(author=tolkien, title="The Lord of the Rings", year=1955)
await Book.objects.create(author=tolkien, title="The Silmarillion", year=1977)
# alternative creation of object divided into 2 steps
sapkowski = Author(name="Andrzej Sapkowski")
# do some stuff
await sapkowski.save()
# or save() after initialization
await Book(author=sapkowski, title="The Witcher", year=1990).save()
await Book(author=sapkowski, title="The Tower of Fools", year=2002).save()
# to read more about inserting data into the database
# visit: https://collerek.github.io/ormar/queries/create/
async def read():
# Fetch an instance, without loading a foreign key relationship on it.
# Django style
book = await Book.objects.get(title="The Hobbit")
# or python style
book = await Book.objects.get(Book.title == "The Hobbit")
book2 = await Book.objects.first()
# first() fetches the instance with the lowest primary key value
assert book == book2
# you can access all fields on loaded model
assert book.title == "The Hobbit"
assert book.year == 1937
# when no condition is passed to get()
# it behaves as last() based on primary key column
book3 = await Book.objects.get()
assert book3.title == "The Tower of Fools"
# When you have a relation, ormar always defines a related model for you
# even when all you loaded is a foreign key value like in this example
assert isinstance(book.author, Author)
# primary key is populated from foreign key stored in books table
assert book.author.pk == 1
# since the related model was not loaded all other fields are None
assert book.author.name is None
# Load the relationship from the database when you already have the related model
# alternatively see joins section below
await book.author.load()
assert book.author.name == "J.R.R. Tolkien"
# get all rows for given model
authors = await Author.objects.all()
assert len(authors) == 2
# to read more about reading data from the database
# visit: https://collerek.github.io/ormar/queries/read/
async def update():
# read existing row from db
tolkien = await Author.objects.get(name="J.R.R. Tolkien")
assert tolkien.name == "J.R.R. Tolkien"
tolkien_id = tolkien.id
# change the selected property
tolkien.name = "John Ronald Reuel Tolkien"
# call update on a model instance
await tolkien.update()
# confirm that object was updated
tolkien = await Author.objects.get(name="John Ronald Reuel Tolkien")
assert tolkien.name == "John Ronald Reuel Tolkien"
assert tolkien.id == tolkien_id
# alternatively update data without loading
await Author.objects.filter(name__contains="Tolkien").update(name="J.R.R. Tolkien")
# to read more about updating data in the database
# visit: https://collerek.github.io/ormar/queries/update/
async def delete():
silmarillion = await Book.objects.get(year=1977)
# call delete() on instance
await silmarillion.delete()
# alternatively delete without loading
await Book.objects.delete(title="The Tower of Fools")
# note that when there is no record ormar raises NoMatch exception
try:
await Book.objects.get(year=1977)
except ormar.NoMatch:
print("No book from 1977!")
# to read more about deleting data from the database
# visit: https://collerek.github.io/ormar/queries/delete/
# note that despite the fact that the record no longer exists in the database
# the object above is still accessible and you can use it (and e.g. save() it) again.
tolkien = silmarillion.author
await Book.objects.create(author=tolkien, title="The Silmarillion", year=1977)
async def joins():
# To join two models use select_related
# Django style
book = await Book.objects.select_related("author").get(title="The Hobbit")
# Python style
book = await Book.objects.select_related(Book.author).get(
Book.title == "The Hobbit"
)
# now the author is already prefetched
assert book.author.name == "J.R.R. Tolkien"
# By default you also get a second side of the relation
# constructed as lowercase source model name +'s' (books in this case)
# you can also provide custom name with parameter related_name
# Django style
author = await Author.objects.select_related("books").all(name="J.R.R. Tolkien")
# Python style
author = await Author.objects.select_related(Author.books).all(
Author.name == "J.R.R. Tolkien"
)
assert len(author[0].books) == 3
# for reverse and many to many relations you can also prefetch_related
# that executes a separate query for each of related models
# Django style
author = await Author.objects.prefetch_related("books").get(name="J.R.R. Tolkien")
# Python style
author = await Author.objects.prefetch_related(Author.books).get(
Author.name == "J.R.R. Tolkien"
)
assert len(author.books) == 3
# to read more about relations
# visit: https://collerek.github.io/ormar/relations/
# to read more about joins and subqueries
# visit: https://collerek.github.io/ormar/queries/joins-and-subqueries/
async def filter_and_sort():
# to filter the query you can use filter() or pass key-value pairs to
# get(), all() etc.
# to use special methods or access related model fields use a double
# underscore, e.g. to filter by the author's name use author__name
# Django style
books = await Book.objects.all(author__name="J.R.R. Tolkien")
# python style
books = await Book.objects.all(Book.author.name == "J.R.R. Tolkien")
assert len(books) == 3
# filter can also accept special methods, separated with a double underscore
# to issue the sql query ` where authors.name like "%tolkien%"` that is not
# case sensitive (hence the lowercase t in tolkien)
# Django style
books = await Book.objects.filter(author__name__icontains="tolkien").all()
# python style
books = await Book.objects.filter(Book.author.name.icontains("tolkien")).all()
assert len(books) == 3
# to sort use order_by() function of queryset
# to sort decreasing use hyphen before the field name
# same as with filter you can use double underscores to access related fields
# Django style
books = (
await Book.objects.filter(author__name__icontains="tolkien")
.order_by("-year")
.all()
)
# python style
books = (
await Book.objects.filter(Book.author.name.icontains("tolkien"))
.order_by(Book.year.desc())
.all()
)
assert len(books) == 3
assert books[0].title == "The Silmarillion"
assert books[2].title == "The Hobbit"
# to read more about filtering and ordering
# visit: https://collerek.github.io/ormar/queries/filter-and-sort/
async def subset_of_columns():
# to exclude some columns from loading when querying the database
# you can use fields() method
hobbit = await Book.objects.fields(["title"]).get(title="The Hobbit")
# note that fields not included in fields() are empty (set to None)
assert hobbit.year is None
assert hobbit.author is None
# selected field is there
assert hobbit.title == "The Hobbit"
# alternatively you can provide columns you want to exclude
hobbit = await Book.objects.exclude_fields(["year"]).get(title="The Hobbit")
# year is still not set
assert hobbit.year is None
# but author is back
assert hobbit.author is not None
# also you cannot exclude the primary key column - it's always there
# even if you EXPLICITLY exclude it, it will still be included
# note that each model has a shortcut for the primary_key column, which is pk
# and you can filter/access/set the values by this alias like below
assert hobbit.pk is not None
# note that you cannot exclude fields that are not nullable
# (required) in model definition
try:
await Book.objects.exclude_fields(["title"]).get(title="The Hobbit")
except pydantic.ValidationError:
print("Cannot exclude non nullable field title")
# to read more about selecting subset of columns
# visit: https://collerek.github.io/ormar/queries/select-columns/
async def pagination():
# to limit number of returned rows use limit()
books = await Book.objects.limit(1).all()
assert len(books) == 1
assert books[0].title == "The Hobbit"
# to offset number of returned rows use offset()
books = await Book.objects.limit(1).offset(1).all()
assert len(books) == 1
assert books[0].title == "The Lord of the Rings"
# alternatively use paginate that combines both
books = await Book.objects.paginate(page=2, page_size=2).all()
assert len(books) == 2
# note that we removed one of Sapkowski's books in delete()
# and recreated The Silmarillion - by default when no order_by is set
# ordering sorts by primary_key column
assert books[0].title == "The Witcher"
assert books[1].title == "The Silmarillion"
# to read more about pagination and number of rows
# visit: https://collerek.github.io/ormar/queries/pagination-and-rows-number/
async def aggregations():
# count:
assert 2 == await Author.objects.count()
# exists
assert await Book.objects.filter(title="The Hobbit").exists()
# maximum
assert 1990 == await Book.objects.max(columns=["year"])
# minimum
assert 1937 == await Book.objects.min(columns=["year"])
# average
assert 1964.75 == await Book.objects.avg(columns=["year"])
# sum
assert 7859 == await Book.objects.sum(columns=["year"])
# to read more about aggregated functions
# visit: https://collerek.github.io/ormar/queries/aggregations/
async def raw_data():
# extract raw data in the form of dicts or tuples
# note that this skips the validation(!) as models are
# not created from parsed data
# get list of objects as dicts
assert await Book.objects.values() == [
{"id": 1, "author": 1, "title": "The Hobbit", "year": 1937},
{"id": 2, "author": 1, "title": "The Lord of the Rings", "year": 1955},
{"id": 4, "author": 2, "title": "The Witcher", "year": 1990},
{"id": 5, "author": 1, "title": "The Silmarillion", "year": 1977},
]
# get list of objects as tuples
assert await Book.objects.values_list() == [
(1, 1, "The Hobbit", 1937),
(2, 1, "The Lord of the Rings", 1955),
(4, 2, "The Witcher", 1990),
(5, 1, "The Silmarillion", 1977),
]
# filter data - note how you always get a list
assert await Book.objects.filter(title="The Hobbit").values() == [
{"id": 1, "author": 1, "title": "The Hobbit", "year": 1937}
]
# select only wanted fields
assert await Book.objects.filter(title="The Hobbit").values(["id", "title"]) == [
{"id": 1, "title": "The Hobbit"}
]
# if you select only one column you could flatten it with values_list
assert await Book.objects.values_list("title", flatten=True) == [
"The Hobbit",
"The Lord of the Rings",
"The Witcher",
"The Silmarillion",
]
# to read more about extracting raw values
# visit: https://collerek.github.io/ormar/queries/aggregations/
async def with_connect(function):
# note that for any backend other than sqlite you actually need to
# connect to the database to perform db operations
async with base_ormar_config.database:
await function()
# note that if you use a framework like `fastapi` you shouldn't connect
# in your endpoints but keep a global connection pool
# check https://collerek.github.io/ormar/fastapi/ and section with db connection
# gather and execute all functions
# note - normally imports should be at the beginning of the file
import asyncio
# note that normally you would use the gather() function to run several functions
# concurrently, but here we modify the data and rely on the order of the functions
for func in [
create,
read,
update,
delete,
joins,
filter_and_sort,
subset_of_columns,
pagination,
aggregations,
raw_data,
]:
print(f"Executing: {func.__name__}")
asyncio.run(with_connect(func))
# drop the database tables
base_ormar_config.metadata.drop_all(base_ormar_config.engine)
Ormar Specification
QuerySet methods
- create(**kwargs) -> Model
- get(*args, **kwargs) -> Model
- get_or_none(*args, **kwargs) -> Optional[Model]
- get_or_create(_defaults: Optional[Dict[str, Any]] = None, *args, **kwargs) -> Tuple[Model, bool]
- first(*args, **kwargs) -> Model
- update(each: bool = False, **kwargs) -> int
- update_or_create(**kwargs) -> Model
- bulk_create(objects: List[Model]) -> None
- bulk_update(objects: List[Model], columns: List[str] = None) -> None
- delete(*args, each: bool = False, **kwargs) -> int
- all(*args, **kwargs) -> List[Optional[Model]]
- iterate(*args, **kwargs) -> AsyncGenerator[Model]
- filter(*args, **kwargs) -> QuerySet
- exclude(*args, **kwargs) -> QuerySet
- select_related(related: Union[List, str]) -> QuerySet
- prefetch_related(related: Union[List, str]) -> QuerySet
- limit(limit_count: int) -> QuerySet
- offset(offset: int) -> QuerySet
- count(distinct: bool = True) -> int
- exists() -> bool
- max(columns: List[str]) -> Any
- min(columns: List[str]) -> Any
- avg(columns: List[str]) -> Any
- sum(columns: List[str]) -> Any
- fields(columns: Union[List, str, set, dict]) -> QuerySet
- exclude_fields(columns: Union[List, str, set, dict]) -> QuerySet
- order_by(columns: Union[List, str]) -> QuerySet
- values(fields: Union[List, str, Set, Dict])
- values_list(fields: Union[List, str, Set, Dict])
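Most of these methods are covered in the quick start above; below is a short sketch of two that are not - get_or_create and bulk_create - reusing the Author and Book models from the quick start (the titles are illustrative, and like the other functions it has to run inside a connected event loop):
async def more_queries():
    # get_or_create returns a tuple of (instance, created_flag)
    author, created = await Author.objects.get_or_create(name="J.R.R. Tolkien")
    # bulk_create inserts many rows in one call
    await Book.objects.bulk_create(
        [
            Book(author=author, title="Unfinished Tales", year=1980),
            Book(author=author, title="The Children of Hurin", year=2007),
        ]
    )
    # exists() returns a bool without loading the rows
    assert await Book.objects.filter(year__gt=1979).exists()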
Relation types
- One to many - with ForeignKey(to: Model)
- Many to many - with ManyToMany(to: Model, Optional[through]: Model)
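A short sketch of both relation types, reusing base_ormar_config and the Author model from the quick start (the Category and Post models are illustrative only):
class Category(ormar.Model):
    ormar_config = base_ormar_config.copy(tablename="categories")

    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=40)

class Post(ormar.Model):
    ormar_config = base_ormar_config.copy(tablename="posts")

    id: int = ormar.Integer(primary_key=True)
    title: str = ormar.String(max_length=200)
    author: Optional[Author] = ormar.ForeignKey(Author)  # one to many
    categories = ormar.ManyToMany(Category)  # many to many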
Model fields types
Available Model Fields (with required args - optional ones in docs):
- String(max_length)
- Text()
- Boolean()
- Integer()
- Float()
- Date()
- Time()
- DateTime()
- JSON()
- BigInteger()
- SmallInteger()
- Decimal(scale, precision)
- UUID()
- LargeBinary(max_length)
- Enum(enum_class)
- Enum-like Field - by passing choices to any other Field type
- EncryptedString - by passing encrypt_secret and encrypt_backend
- ForeignKey(to)
- ManyToMany(to)
Available fields options
The following keyword arguments are supported on all field types.
- primary_key: bool
- nullable: bool
- default: Any
- server_default: Any
- index: bool
- unique: bool
- choices: typing.Sequence
- name: str
All fields are required unless one of the following is set:
- nullable - Creates a nullable column. Sets the default to False. Read the fields common parameters for details.
- sql_nullable - Used to set a different setting for pydantic and for the database. Defaults to the nullable value. Read the fields common parameters for details.
- default - Set a default value for the field. Not available for relation fields.
- server_default - Set a default value for the field on the server side (like sqlalchemy's func.now()). Not available for relation fields.
- primary key with autoincrement - When a column is set to primary key and autoincrement is set on that column. Autoincrement is set by default on int primary keys.
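A short sketch showing a few of these options on a model declared with the quick start's base_ormar_config (the Note model and its fields are illustrative only):
class Note(ormar.Model):
    ormar_config = base_ormar_config.copy(tablename="notes")

    # int primary keys get autoincrement by default
    id: int = ormar.Integer(primary_key=True)
    # nullable column, so the field is not required
    text: str = ormar.String(max_length=500, nullable=True)
    # client-side default value
    likes: int = ormar.Integer(default=0)
    # server-side default evaluated by the database
    created = ormar.DateTime(server_default=sqlalchemy.func.now())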
Available signals
Signals allow you to trigger your function for a given event on a given Model.
- pre_save
- post_save
- pre_update
- post_update
- pre_delete
- post_delete
- pre_relation_add
- post_relation_add
- pre_relation_remove
- post_relation_remove
- post_bulk_update
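A minimal sketch of wiring up one of these signals, reusing the Author model from the quick start (the receiver name and body are illustrative only):
from ormar import pre_save

@pre_save(Author)
async def fill_missing_name(sender, instance, **kwargs):
    # called right before an Author instance is saved to the database
    if not instance.name:
        instance.name = "Unknown author"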