# Read data from database
The following methods allow you to load data from the database.

* `get(*args, **kwargs) -> Model`
* `get_or_none(*args, **kwargs) -> Optional[Model]`
* `get_or_create(_defaults: Optional[Dict[str, Any]] = None, *args, **kwargs) -> Tuple[Model, bool]`
* `first(*args, **kwargs) -> Model`
* `all(*args, **kwargs) -> List[Optional[Model]]`
* `iterate(*args, **kwargs) -> AsyncGenerator[Model]`
* `Model`
    * `Model.load()` method
* `QuerysetProxy`
    * `QuerysetProxy.get(*args, **kwargs)` method
    * `QuerysetProxy.get_or_none(*args, **kwargs)` method
    * `QuerysetProxy.get_or_create(_defaults: Optional[Dict[str, Any]] = None, *args, **kwargs)` method
    * `QuerysetProxy.first(*args, **kwargs)` method
    * `QuerysetProxy.all(*args, **kwargs)` method
## get

`get(*args, **kwargs) -> Model`

Gets the first row from the db meeting the criteria set by kwargs.

If no criteria are set it will return the last row in the db sorted by the pk column.

Passing criteria is actually calling the `filter(*args, **kwargs)` method described below.
```python
class Track(ormar.Model):
    ormar_config = ormar.OrmarConfig(
        database=database,
        metadata=metadata,
        tablename="track",
    )

    id: int = ormar.Integer(primary_key=True)
    album: Optional[Album] = ormar.ForeignKey(Album)
    name: str = ormar.String(max_length=100)
    position: int = ormar.Integer()

track = await Track.objects.get(name='The Bird')
# note that above is equivalent to await Track.objects.filter(name='The Bird').get()

track2 = await Track.objects.get()
track == track2
# True since it's the only row in db in our example
# and get without arguments returns the first row by pk column desc
```
!!!warning
    If no row meets the criteria a `NoMatch` exception is raised.

    If there are multiple rows meeting the criteria a `MultipleMatches` exception is raised.
## get_or_none

`get_or_none(*args, **kwargs) -> Optional[Model]`

Exact equivalent of `get` described above, but instead of raising an exception it returns `None` if no db record matching the criteria is found.
## get_or_create

`get_or_create(_defaults: Optional[Dict[str, Any]] = None, *args, **kwargs) -> Tuple[Model, bool]`

Combination of the `create` and `get` methods.

Tries to get a row meeting the criteria, and if a `NoMatch` exception is raised it creates
a new one with the given `kwargs` and `_defaults`.
```python
class Album(ormar.Model):
    ormar_config = base_ormar_config.copy(tablename="album")

    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=100)
    year: int = ormar.Integer()

album, created = await Album.objects.get_or_create(name='The Cat', _defaults={"year": 1999})
assert created is True
assert album.name == "The Cat"
assert album.year == 1999
# object is created as it does not exist

album2, created = await Album.objects.get_or_create(name='The Cat')
assert created is False
assert album == album2
# created is False as the same db row is returned
```
!!!warning
    Despite being an equivalent row from the database, the `album` and `album2` in the
    example above are 2 different python objects!
    Updating one of them will not refresh the second one until you explicitly `load()` the
    fresh data from the db.

!!!note
    Note that if you want to create a new object you either have to pass the pk column value, or the pk column has to be set as autoincrement.
## first

`first(*args, **kwargs) -> Model`

Gets the first row from the db ordered by the primary key column ascending.
```python
class Album(ormar.Model):
    ormar_config = base_ormar_config.copy(tablename="album")

    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=100)

await Album.objects.create(name='The Cat')
await Album.objects.create(name='The Dog')

album = await Album.objects.first()
# first row by primary_key column asc
assert album.name == 'The Cat'
```
## all

`all(*args, **kwargs) -> List[Optional[Model]]`

Returns all rows from a database for the given model for the set filter options.

Passing kwargs is a shortcut and equals calling `filter(*args, **kwargs).all()`.

If there are no rows meeting the criteria an empty list is returned.
```python
class Album(ormar.Model):
    ormar_config = base_ormar_config.copy(tablename="album")

    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=100)

class Track(ormar.Model):
    ormar_config = base_ormar_config.copy(tablename="track")

    id: int = ormar.Integer(primary_key=True)
    album: Optional[Album] = ormar.ForeignKey(Album)
    title: str = ormar.String(max_length=100)
    position: int = ormar.Integer()

tracks = await Track.objects.select_related("album").all(album__name='Sample')
# will return a list of all Tracks for the album named Sample
# for more on joins visit joining and subqueries section

tracks = await Track.objects.all()
# will return a list of all Tracks in database
```
## iterate

`iterate(*args, **kwargs) -> AsyncGenerator[Model]`

Returns an async iterable generator for all rows from a database for the given model.

Passing args and/or kwargs is a shortcut and equals calling `filter(*args, **kwargs).iterate()`.

If there are no rows meeting the criteria an empty async generator is returned.
```python
class Album(ormar.Model):
    ormar_config = base_ormar_config.copy(tablename="album")

    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=100)

await Album.objects.create(name='The Cat')
await Album.objects.create(name='The Dog')

# will asynchronously iterate all Album models, yielding one model at a time from the generator
async for album in Album.objects.iterate():
    print(album.name)
    # The Cat
    # The Dog
```
!!!warning
    Use of `iterate()` causes previous `prefetch_related()` calls to be ignored,
    since these two optimizations do not make sense together.

    If `iterate()` and `prefetch_related()` are used together, a `QueryDefinitionError` exception is raised.
## Model methods

Each model instance has a set of methods to save, update or load itself.

### load

You can load the `ForeignKey` related model by calling the `load()` method.

`load()` can be used to refresh the model from the database (if it was changed by some other process).

!!!tip
    Read more about the `load()` method in the models methods section.
## QuerysetProxy methods

Accessing a related `ManyToMany` field, as well as a `ReverseForeignKey`, directly returns the list of related models.

But at the same time it exposes a subset of the QuerySet API, so you can filter, create, select related etc. the related models directly from the parent model.
### get

Works exactly the same as the `get` function above, but allows you to fetch related objects from the other side of the relation.

!!!tip
    To read more about `QuerysetProxy` visit the querysetproxy section.

### get_or_none

Exact equivalent of `get` described above, but instead of raising an exception it returns `None` if no db record matching the criteria is found.

!!!tip
    To read more about `QuerysetProxy` visit the querysetproxy section.

### get_or_create

Works exactly the same as the `get_or_create` function above, but allows you to query or create related objects from the other side of the relation.

!!!tip
    To read more about `QuerysetProxy` visit the querysetproxy section.

### first

Works exactly the same as the `first` function above, but allows you to query related objects from the other side of the relation.

!!!tip
    To read more about `QuerysetProxy` visit the querysetproxy section.

### all

Works exactly the same as the `all` function above, but allows you to query related objects from the other side of the relation.

!!!tip
    To read more about `QuerysetProxy` visit the querysetproxy section.