Change docs provider (#652)

* switch to mkdocstrings

* update lock
collerek
2022-05-06 12:50:26 +02:00
committed by GitHub
parent 24b6ff4781
commit 1a99a65eb4
55 changed files with 304 additions and 9702 deletions


```diff
@@ -9,18 +9,16 @@ jobs:
     steps:
       - name: Checkout Master
        uses: actions/checkout@v3
-      - name: Set up Python 3.7
+      - name: Set up Python 3.8
        uses: actions/setup-python@v2
        with:
-          python-version: '3.7'
+          python-version: '3.8'
      - name: Install dependencies
        run: |
-          python3.7 -m pip install --upgrade pip
-          python3.7 -m pip install "mkdocs-material>=8.1.4,<9.0.0" "pydoc-markdown==3.13"
-      - name: Build API docs
-        run: pydoc-markdown --build --site-dir=api
-      - name: Copy APi docs
-        run: cp -Tavr ./build/docs/content/ ./docs/api/
+          python -m pip install poetry==1.1.11
+          poetry install
+        env:
+          POETRY_VIRTUALENVS_CREATE: false
      - name: Deploy
        run: |
          mkdocs gh-deploy --force
```


@@ -1,89 +0,0 @@
<a name="exceptions"></a>
# exceptions
Gathers all exceptions thrown by ormar.
<a name="exceptions.AsyncOrmException"></a>
## AsyncOrmException Objects
```python
class AsyncOrmException(Exception)
```
Base ormar Exception
<a name="exceptions.ModelDefinitionError"></a>
## ModelDefinitionError Objects
```python
class ModelDefinitionError(AsyncOrmException)
```
Raised for errors related to the model definition itself:
* setting @property_field on method with arguments other than func(self)
* defining a Field without required parameters
* defining a model with more than one primary_key
* defining a model without primary_key
* setting primary_key column as pydantic_only
<a name="exceptions.ModelError"></a>
## ModelError Objects
```python
class ModelError(AsyncOrmException)
```
Raised when a model is initialized with a non-existing field keyword.
<a name="exceptions.NoMatch"></a>
## NoMatch Objects
```python
class NoMatch(AsyncOrmException)
```
Raised for database queries that have no matching result (empty result).
<a name="exceptions.MultipleMatches"></a>
## MultipleMatches Objects
```python
class MultipleMatches(AsyncOrmException)
```
Raised for database queries that should return one row (i.e. get, first etc.)
but have multiple matching results in the response.
<a name="exceptions.QueryDefinitionError"></a>
## QueryDefinitionError Objects
```python
class QueryDefinitionError(AsyncOrmException)
```
Raised for errors in query definition:
* using contains or icontains filter with instance of the Model
* using Queryset.update() without filter and setting each flag to True
* using Queryset.delete() without filter and setting each flag to True
<a name="exceptions.ModelPersistenceError"></a>
## ModelPersistenceError Objects
```python
class ModelPersistenceError(AsyncOrmException)
```
Raised for updates of models without primary_key set (cannot retrieve from db)
or for saving a model with a relation to an unsaved model (cannot extract the fk value).
<a name="exceptions.SignalDefinitionError"></a>
## SignalDefinitionError Objects
```python
class SignalDefinitionError(AsyncOrmException)
```
Raised when a non-callable receiver is passed as a signal callback.
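For illustration, a minimal sketch of how these exceptions surface in user code; the `User` model and the connection string below are hypothetical and not part of ormar itself.
```python
import databases
import sqlalchemy

import ormar
from ormar.exceptions import MultipleMatches, NoMatch

database = databases.Database("sqlite:///db.sqlite")
metadata = sqlalchemy.MetaData()


class User(ormar.Model):
    class Meta:
        database = database
        metadata = metadata
        tablename = "users"

    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=100)


async def get_user_or_none(user_id: int):
    # NoMatch is raised when get() finds no matching row,
    # MultipleMatches when a get()/first() style query matches more than one row.
    try:
        return await User.objects.get(id=user_id)
    except (NoMatch, MultipleMatches):
        return None
```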


@@ -1,276 +0,0 @@
<a name="fields.base"></a>
# fields.base
<a name="fields.base.BaseField"></a>
## BaseField Objects
```python
class BaseField(FieldInfo)
```
BaseField serves as a parent class for all basic Fields in ormar.
It keeps all common parameters available for all fields as well as
a set of useful functions.
All values are kept as class variables; ormar Fields are never instantiated.
Subclasses pydantic.FieldInfo to keep the fields related
to pydantic field types like ConstrainedStr.
<a name="fields.base.BaseField.is_valid_uni_relation"></a>
#### is\_valid\_uni\_relation
```python
| is_valid_uni_relation() -> bool
```
Checks if the field is a relation definition, but only for a ForeignKey relation,
so it excludes ManyToMany fields as well as virtual ForeignKeys
(the second side of an FK relation).
Used to determine if a field is a db ForeignKey column that
should be saved/populated when dealing with internal/own
Model columns only.
**Returns**:
`bool`: result of the check
<a name="fields.base.BaseField.get_alias"></a>
#### get\_alias
```python
| get_alias() -> str
```
Used to translate Model column names to database column names during db queries.
**Returns**:
`str`: returns custom database column name if defined by user,
<a name="fields.base.BaseField.get_pydantic_default"></a>
#### get\_pydantic\_default
```python
| get_pydantic_default() -> Dict
```
Generates base pydantic.FieldInfo with only default and optionally
required to fix pydantic Json field being set to required=False.
Used in an ormar Model Metaclass.
**Returns**:
`pydantic.FieldInfo`: instance of base pydantic.FieldInfo
<a name="fields.base.BaseField.default_value"></a>
#### default\_value
```python
| default_value(use_server: bool = False) -> Optional[Dict]
```
Returns a FieldInfo instance with populated default
(static) or default_factory (function).
If the field is an autoincrement primary key the default is None.
Otherwise the field has to have either default or default_factory populated.
If all default conditions fail, None is returned.
Used in converting to pydantic FieldInfo.
**Arguments**:
- `use_server` (`bool`): flag marking if server_default should be treated as default value, default False
**Returns**:
`Optional[pydantic.FieldInfo]`: returns a call to pydantic.Field
<a name="fields.base.BaseField.get_default"></a>
#### get\_default
```python
| get_default(use_server: bool = False) -> Any
```
Returns the default value for a field.
If the default is a Callable, the function is called and the actual result is returned.
Used to populate default_values for the pydantic Model in the ormar Model Metaclass.
**Arguments**:
- `use_server` (`bool`): flag marking if server_default should be treated as default value, default False
**Returns**:
`Any`: default value for the field if set, otherwise implicit None
<a name="fields.base.BaseField.has_default"></a>
#### has\_default
```python
| has_default(use_server: bool = True) -> bool
```
Checks if the field has a default value set.
**Arguments**:
- `use_server` (`bool`): flag marking if server_default should be treated as default value, default False
**Returns**:
`bool`: result of the check if default value is set
<a name="fields.base.BaseField.is_auto_primary_key"></a>
#### is\_auto\_primary\_key
```python
| is_auto_primary_key() -> bool
```
Checks if the field is a primary key and, if so, whether it is set to autoincrement.
An autoincrement primary_key is nullable/optional.
**Returns**:
`bool`: result of the check for primary key and autoincrement
<a name="fields.base.BaseField.construct_constraints"></a>
#### construct\_constraints
```python
| construct_constraints() -> List
```
Converts the list of ormar constraints into sqlalchemy ForeignKeys.
Has to be done dynamically, as sqlalchemy binds ForeignKey to the table,
and we need a new ForeignKey for subclasses of the current model.
**Returns**:
`List[sqlalchemy.schema.ForeignKey]`: List of sqlalchemy foreign keys - by default one.
<a name="fields.base.BaseField.get_column"></a>
#### get\_column
```python
| get_column(name: str) -> sqlalchemy.Column
```
Returns definition of sqlalchemy.Column used in creation of sqlalchemy.Table.
Populates name, column type constraints, as well as a number of parameters like
primary_key, index, unique, nullable, default and server_default.
**Arguments**:
- `name` (`str`): name of the db column - used if alias is not set
**Returns**:
`sqlalchemy.Column`: actual definition of the database column as sqlalchemy requires.
<a name="fields.base.BaseField._get_encrypted_column"></a>
#### \_get\_encrypted\_column
```python
| _get_encrypted_column(name: str) -> sqlalchemy.Column
```
Returns EncryptedString column type instead of actual column.
**Arguments**:
- `name` (`str`): column name
**Returns**:
`sqlalchemy.Column`: newly defined column
<a name="fields.base.BaseField.expand_relationship"></a>
#### expand\_relationship
```python
| expand_relationship(value: Any, child: Union["Model", "NewBaseModel"], to_register: bool = True) -> Any
```
Function overwritten for relations; in a basic field the value is returned as is.
For relations the child model is first constructed (if needed),
registered in relation and returned.
For relation fields the value can be a pk value (Any type of field),
dict (from Model) or actual instance/list of a "Model".
**Arguments**:
- `value` (`Any`): a Model field value, returned untouched for non relation fields.
- `child` (`Union["Model", "NewBaseModel"]`): a child Model to register
- `to_register` (`bool`): flag if the relation should be set in RelationshipManager
**Returns**:
`Any`: returns untouched value for normal fields, expands only for relations
<a name="fields.base.BaseField.set_self_reference_flag"></a>
#### set\_self\_reference\_flag
```python
| set_self_reference_flag() -> None
```
Sets `self_reference` to True if the field's `to` and owner are the same model.
**Returns**:
`None`: None
<a name="fields.base.BaseField.has_unresolved_forward_refs"></a>
#### has\_unresolved\_forward\_refs
```python
| has_unresolved_forward_refs() -> bool
```
Verifies if the field has any ForwardRefs that require updating before the
model can be used.
**Returns**:
`bool`: result of the check
<a name="fields.base.BaseField.evaluate_forward_ref"></a>
#### evaluate\_forward\_ref
```python
| evaluate_forward_ref(globalns: Any, localns: Any) -> None
```
Evaluates the ForwardRef to actual Field based on global and local namespaces
**Arguments**:
- `globalns` (`Any`): global namespace
- `localns` (`Any`): local namespace
**Returns**:
`None`: None
<a name="fields.base.BaseField.get_related_name"></a>
#### get\_related\_name
```python
| get_related_name() -> str
```
Returns the name to use for the reverse relation.
It's either set as `related_name` or by default it's the owner model's `get_name()` + 's'.
**Returns**:
`str`: name of the related_name or default related name.


@@ -1,28 +0,0 @@
<a name="decorators.property_field"></a>
# decorators.property\_field
<a name="decorators.property_field.property_field"></a>
#### property\_field
```python
property_field(func: Callable) -> Union[property, Callable]
```
Decorator to set a property-like function on a Model to be exposed
as a field in dict() and fastapi responses.
Although you can decorate a @property field like this and it will work,
mypy validation will complain about it.
Note that "fields" exposed like this do not go through validation.
**Raises**:
- `ModelDefinitionError`: if method has any other argument than self.
**Arguments**:
- `func` (`Callable`): decorated function to be exposed
**Returns**:
`Union[property, Callable]`: decorated function passed in the func param, with `__property_field__ = True` set
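A short usage sketch of the decorator; the `Employee` model below is illustrative only.
```python
import databases
import sqlalchemy

import ormar

database = databases.Database("sqlite:///db.sqlite")
metadata = sqlalchemy.MetaData()


class Employee(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)
    first_name: str = ormar.String(max_length=100)
    last_name: str = ormar.String(max_length=100)

    @ormar.property_field
    def full_name(self) -> str:
        # exposed in dict() and fastapi responses, but not validated and not stored in db
        return f"{self.first_name} {self.last_name}"
```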


@@ -1,396 +0,0 @@
<a name="fields.foreign_key"></a>
# fields.foreign\_key
<a name="fields.foreign_key.create_dummy_instance"></a>
#### create\_dummy\_instance
```python
create_dummy_instance(fk: Type["T"], pk: Any = None) -> "T"
```
Ormar never returns raw data.
So if you have a related field that has a value populated,
it will construct a Model instance out of it for you.
Creates a "fake" instance of the passed Model from the pk value.
The instantiated Model has only the pk value filled.
To achieve this the __pk_only__ flag has to be passed, as it skips the validation.
If the nested related Models are required they are set with -1 as the pk value.
**Arguments**:
- `fk` (`Model class`): class of the related Model to which instance should be constructed
- `pk` (`Any`): value of the primary_key column
**Returns**:
`Model`: Model instance populated with only pk
<a name="fields.foreign_key.create_dummy_model"></a>
#### create\_dummy\_model
```python
create_dummy_model(base_model: Type["T"], pk_field: Union[BaseField, "ForeignKeyField", "ManyToManyField"]) -> Type["BaseModel"]
```
Used to construct a dummy pydantic model for type hints and pydantic validation.
Populates only the pk field and sets it to the desired type.
**Arguments**:
- `base_model` (`Model class`): class of target dummy model
- `pk_field` (`Union[BaseField, "ForeignKeyField", "ManyToManyField"]`): ormar Field to be set on pydantic Model
**Returns**:
`pydantic.BaseModel`: constructed dummy model
<a name="fields.foreign_key.populate_fk_params_based_on_to_model"></a>
#### populate\_fk\_params\_based\_on\_to\_model
```python
populate_fk_params_based_on_to_model(to: Type["T"], nullable: bool, onupdate: str = None, ondelete: str = None) -> Tuple[Any, List, Any]
```
Based on the target model to which the relation leads, populates the type of the
pydantic field to use, the ForeignKey constraint and the type of the target column field.
**Arguments**:
- `to` (`Model class`): target related ormar Model
- `nullable` (`bool`): marks field as optional/ required
- `onupdate` (`str`): parameter passed to sqlalchemy.ForeignKey. How to treat child rows on update of parent (the one where FK is defined) model.
- `ondelete` (`str`): parameter passed to sqlalchemy.ForeignKey. How to treat child rows on delete of parent (the one where FK is defined) model.
**Returns**:
`Tuple[Any, List, Any]`: tuple with target pydantic type, list of fk constraints and target col type
<a name="fields.foreign_key.validate_not_allowed_fields"></a>
#### validate\_not\_allowed\_fields
```python
validate_not_allowed_fields(kwargs: Dict) -> None
```
Verifies if not allowed parameters are set on relation models.
Usually they are omitted later anyway, but this way the user is explicitly
notified that they are not allowed/supported.
**Raises**:
- `ModelDefinitionError`: if any forbidden field is set
**Arguments**:
- `kwargs` (`Dict`): dict of kwargs to verify passed to relation field
<a name="fields.foreign_key.UniqueColumns"></a>
## UniqueColumns Objects
```python
class UniqueColumns(UniqueConstraint)
```
Subclass of sqlalchemy.UniqueConstraint.
Used so the user can avoid importing anything from sqlalchemy directly.
<a name="fields.foreign_key.ForeignKeyConstraint"></a>
## ForeignKeyConstraint Objects
```python
@dataclass
class ForeignKeyConstraint()
```
Internal container to store ForeignKey definitions used later
to produce sqlalchemy.ForeignKeys
<a name="fields.foreign_key.ForeignKey"></a>
#### ForeignKey
```python
ForeignKey(to: "ToType", *, name: str = None, unique: bool = False, nullable: bool = True, related_name: str = None, virtual: bool = False, onupdate: str = None, ondelete: str = None, **kwargs: Any, ,) -> "T"
```
Despite its name, it's a function that returns a constructed ForeignKeyField.
This function is actually used in model declaration (as ormar.ForeignKey(ToModel)).
Accepts a number of relation-setting parameters as well as all BaseField ones.
**Arguments**:
- `to` (`Model class`): target related ormar Model
- `name` (`str`): name of the database field - later called alias
- `unique` (`bool`): parameter passed to sqlalchemy.ForeignKey, unique flag
- `nullable` (`bool`): marks field as optional/ required
- `related_name` (`str`): name of reversed FK relation populated for you on to model
- `virtual` (`bool`): marks if relation is virtual. It is for reversed FK and auto generated FK on through model in Many2Many relations.
- `onupdate` (`str`): parameter passed to sqlalchemy.ForeignKey. How to treat child rows on update of parent (the one where FK is defined) model.
- `ondelete` (`str`): parameter passed to sqlalchemy.ForeignKey. How to treat child rows on delete of parent (the one where FK is defined) model.
- `kwargs` (`Any`): all other args to be populated by BaseField
**Returns**:
`ForeignKeyField`: ormar ForeignKeyField with relation to selected model
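An illustrative declaration using a few of the parameters above; the model names are hypothetical.
```python
from typing import Optional

import databases
import sqlalchemy

import ormar

database = databases.Database("sqlite:///db.sqlite")
metadata = sqlalchemy.MetaData()


class Department(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=100)


class Course(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=100)
    # the reverse side becomes available as department.courses
    department: Optional[Department] = ormar.ForeignKey(
        Department, related_name="courses", nullable=True, ondelete="CASCADE"
    )
```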
<a name="fields.foreign_key.ForeignKeyField"></a>
## ForeignKeyField Objects
```python
class ForeignKeyField(BaseField)
```
Actual class returned from ForeignKey function call and stored in model_fields.
<a name="fields.foreign_key.ForeignKeyField.get_source_related_name"></a>
#### get\_source\_related\_name
```python
| get_source_related_name() -> str
```
Returns the name to use for the source relation name.
For FK it's the same; it differs for m2m fields.
It's either set as `related_name` or by default it's the owner model's `get_name()` + 's'.
**Returns**:
`str`: name of the related_name or default related name.
<a name="fields.foreign_key.ForeignKeyField.get_related_name"></a>
#### get\_related\_name
```python
| get_related_name() -> str
```
Returns name to use for reverse relation.
It's either set as `related_name` or by default it's the owner model's `get_name()` + 's'.
**Returns**:
`str`: name of the related_name or default related name.
<a name="fields.foreign_key.ForeignKeyField.default_target_field_name"></a>
#### default\_target\_field\_name
```python
| default_target_field_name() -> str
```
Returns default target model name on through model.
**Returns**:
`str`: name of the field
<a name="fields.foreign_key.ForeignKeyField.default_source_field_name"></a>
#### default\_source\_field\_name
```python
| default_source_field_name() -> str
```
Returns default source model name on through model.
**Returns**:
`str`: name of the field
<a name="fields.foreign_key.ForeignKeyField.evaluate_forward_ref"></a>
#### evaluate\_forward\_ref
```python
| evaluate_forward_ref(globalns: Any, localns: Any) -> None
```
Evaluates the ForwardRef to actual Field based on global and local namespaces
**Arguments**:
- `globalns` (`Any`): global namespace
- `localns` (`Any`): local namespace
**Returns**:
`None`: None
<a name="fields.foreign_key.ForeignKeyField._extract_model_from_sequence"></a>
#### \_extract\_model\_from\_sequence
```python
| _extract_model_from_sequence(value: List, child: "Model", to_register: bool) -> List["Model"]
```
Takes a list of Models and registers them on the parent.
Registration is mutual, so children also have a reference to the parent.
Used in reverse FK relations.
**Arguments**:
- `value` (`List`): list of Model
- `child` (`Model`): child/ related Model
- `to_register` (`bool`): flag if the relation should be set in RelationshipManager
**Returns**:
`List["Model"]`: list (if needed) registered Models
<a name="fields.foreign_key.ForeignKeyField._register_existing_model"></a>
#### \_register\_existing\_model
```python
| _register_existing_model(value: "Model", child: "Model", to_register: bool) -> "Model"
```
Takes an already created instance and registers it for the parent.
Registration is mutual, so children also have a reference to the parent.
Used in reverse FK relations and normal FK for single models.
**Arguments**:
- `value` (`Model`): already instantiated Model
- `child` (`Model`): child/ related Model
- `to_register` (`bool`): flag if the relation should be set in RelationshipManager
**Returns**:
`Model`: (if needed) registered Model
<a name="fields.foreign_key.ForeignKeyField._construct_model_from_dict"></a>
#### \_construct\_model\_from\_dict
```python
| _construct_model_from_dict(value: dict, child: "Model", to_register: bool) -> "Model"
```
Takes a dictionary, creates an instance and registers it for the parent.
If the dictionary contains only one field and it's the pk, it is a __pk_only__ model.
Registration is mutual, so children also have a reference to the parent.
Used in normal FK for dictionaries.
**Arguments**:
- `value` (`dict`): dictionary of a Model
- `child` (`Model`): child/ related Model
- `to_register` (`bool`): flag if the relation should be set in RelationshipManager
**Returns**:
`Model`: (if needed) registered Model
<a name="fields.foreign_key.ForeignKeyField._construct_model_from_pk"></a>
#### \_construct\_model\_from\_pk
```python
| _construct_model_from_pk(value: Any, child: "Model", to_register: bool) -> "Model"
```
Takes a pk value, creates a dummy instance and registers it for the parent.
Registration is mutual, so children also have a reference to the parent.
Used in normal FK for dictionaries.
**Arguments**:
- `value` (`Any`): value of a related pk / fk column
- `child` (`Model`): child/ related Model
- `to_register` (`bool`): flag if the relation should be set in RelationshipManager
**Returns**:
`Model`: (if needed) registered Model
<a name="fields.foreign_key.ForeignKeyField.register_relation"></a>
#### register\_relation
```python
| register_relation(model: "Model", child: "Model") -> None
```
Registers relation between parent and child in the relation manager.
The relation manager is kept on each model (a different instance per model).
Used in the Metaclass, and sometimes some relations are missing
(i.e. cloned Models in fastapi might miss one).
**Arguments**:
- `model` (`Model class`): parent model (with relation definition)
- `child` (`Model class`): child model
<a name="fields.foreign_key.ForeignKeyField.has_unresolved_forward_refs"></a>
#### has\_unresolved\_forward\_refs
```python
| has_unresolved_forward_refs() -> bool
```
Verifies if the field has any ForwardRefs that require updating before the
model can be used.
**Returns**:
`bool`: result of the check
<a name="fields.foreign_key.ForeignKeyField.expand_relationship"></a>
#### expand\_relationship
```python
| expand_relationship(value: Any, child: Union["Model", "NewBaseModel"], to_register: bool = True) -> Optional[Union["Model", List["Model"]]]
```
For relations the child model is first constructed (if needed),
registered in relation and returned.
For relation fields the value can be a pk value (Any type of field),
dict (from Model) or actual instance/list of a "Model".
Selects the appropriate constructor based on a passed value.
**Arguments**:
- `value` (`Any`): a Model field value, returned untouched for non relation fields.
- `child` (`Union["Model", "NewBaseModel"]`): a child Model to register
- `to_register` (`bool`): flag if the relation should be set in RelationshipManager
**Returns**:
`Optional[Union["Model", List["Model"]]]`: returns a Model or a list of Models
<a name="fields.foreign_key.ForeignKeyField.get_relation_name"></a>
#### get\_relation\_name
```python
| get_relation_name() -> str
```
Returns the name of the relation, which can be its own name or the through model
name for m2m models.
**Returns**:
`str`: name of the relation
<a name="fields.foreign_key.ForeignKeyField.get_source_model"></a>
#### get\_source\_model
```python
| get_source_model() -> Type["Model"]
```
Returns model from which the relation comes -> either owner or through model
**Returns**:
`Type["Model"]`: source model


@@ -1,154 +0,0 @@
<a name="fields.many_to_many"></a>
# fields.many\_to\_many
<a name="fields.many_to_many.forbid_through_relations"></a>
#### forbid\_through\_relations
```python
forbid_through_relations(through: Type["Model"]) -> None
```
Verifies that the through model does not have relations.
**Arguments**:
- `through` (`Type["Model"]`): through Model to be checked
<a name="fields.many_to_many.populate_m2m_params_based_on_to_model"></a>
#### populate\_m2m\_params\_based\_on\_to\_model
```python
populate_m2m_params_based_on_to_model(to: Type["Model"], nullable: bool) -> Tuple[Any, Any]
```
Based on the target model to which the relation leads, populates the type of the
pydantic field to use and the type of the target column field.
**Arguments**:
- `to` (`Model class`): target related ormar Model
- `nullable` (`bool`): marks field as optional/ required
**Returns**:
`Tuple[List, Any]`: tuple with target pydantic type and target col type
<a name="fields.many_to_many.ManyToMany"></a>
#### ManyToMany
```python
ManyToMany(to: "ToType", through: Optional["ToType"] = None, *, name: str = None, unique: bool = False, virtual: bool = False, **kwargs: Any, ,) -> "RelationProxy[T]"
```
Despite its name, it's a function that returns a constructed ManyToManyField.
This function is actually used in model declaration
(as ormar.ManyToMany(ToModel, through=ThroughModel)).
Accepts a number of relation-setting parameters as well as all BaseField ones.
**Arguments**:
- `to` (`Model class`): target related ormar Model
- `through` (`Model class`): through model for m2m relation
- `name` (`str`): name of the database field - later called alias
- `unique` (`bool`): parameter passed to sqlalchemy.ForeignKey, unique flag
- `virtual` (`bool`): marks if relation is virtual. It is for reversed FK and auto generated FK on through model in Many2Many relations.
- `kwargs` (`Any`): all other args to be populated by BaseField
**Returns**:
`ManyToManyField`: ormar ManyToManyField with m2m relation to selected model
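A minimal sketch of an m2m declaration; the models are illustrative, and the through model is left out so ormar creates a default empty one (see `create_default_through_model` below).
```python
from typing import List, Optional

import databases
import sqlalchemy

import ormar

database = databases.Database("sqlite:///db.sqlite")
metadata = sqlalchemy.MetaData()


class Category(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=40)


class Post(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)
    title: str = ormar.String(max_length=200)
    # no explicit through model - ormar generates a default empty one
    categories: Optional[List[Category]] = ormar.ManyToMany(Category)
```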
<a name="fields.many_to_many.ManyToManyField"></a>
## ManyToManyField Objects
```python
class ManyToManyField(ForeignKeyField, ormar.QuerySetProtocol, ormar.RelationProtocol)
```
Actual class returned from ManyToMany function call and stored in model_fields.
<a name="fields.many_to_many.ManyToManyField.get_source_related_name"></a>
#### get\_source\_related\_name
```python
| get_source_related_name() -> str
```
Returns the name to use for the source relation name.
For FK it's the same; it differs for m2m fields.
It's either set as `related_name` or by default it's the field name.
**Returns**:
`str`: name of the related_name or default related name.
<a name="fields.many_to_many.ManyToManyField.has_unresolved_forward_refs"></a>
#### has\_unresolved\_forward\_refs
```python
| has_unresolved_forward_refs() -> bool
```
Verifies if the field has any ForwardRefs that require updating before the
model can be used.
**Returns**:
`bool`: result of the check
<a name="fields.many_to_many.ManyToManyField.evaluate_forward_ref"></a>
#### evaluate\_forward\_ref
```python
| evaluate_forward_ref(globalns: Any, localns: Any) -> None
```
Evaluates the ForwardRef to actual Field based on global and local namespaces
**Arguments**:
- `globalns` (`Any`): global namespace
- `localns` (`Any`): local namespace
**Returns**:
`None`: None
<a name="fields.many_to_many.ManyToManyField.get_relation_name"></a>
#### get\_relation\_name
```python
| get_relation_name() -> str
```
Returns the name of the relation, which can be its own name or the through model
name for m2m models.
**Returns**:
`str`: name of the relation
<a name="fields.many_to_many.ManyToManyField.get_source_model"></a>
#### get\_source\_model
```python
| get_source_model() -> Type["Model"]
```
Returns model from which the relation comes -> either owner or through model
**Returns**:
`Type["Model"]`: source model
<a name="fields.many_to_many.ManyToManyField.create_default_through_model"></a>
#### create\_default\_through\_model
```python
| create_default_through_model() -> None
```
Creates default empty through model if no additional fields are required.


@@ -1,447 +0,0 @@
<a name="fields.model_fields"></a>
# fields.model\_fields
<a name="fields.model_fields.is_field_nullable"></a>
#### is\_field\_nullable
```python
is_field_nullable(nullable: Optional[bool], default: Any, server_default: Any, pydantic_only: Optional[bool]) -> bool
```
Checks if the given field should be nullable/ optional based on parameters given.
**Arguments**:
- `nullable` (`Optional[bool]`): flag explicit setting a column as nullable
- `default` (`Any`): value or function to be called as default in python
- `server_default` (`Any`): function to be called as default by sql server
- `pydantic_only` (`Optional[bool]`): flag if fields should not be included in the sql table
**Returns**:
`bool`: result of the check
<a name="fields.model_fields.is_auto_primary_key"></a>
#### is\_auto\_primary\_key
```python
is_auto_primary_key(primary_key: bool, autoincrement: bool) -> bool
```
Checks if field is an autoincrement pk -> if yes it's optional.
**Arguments**:
- `primary_key` (`bool`): flag if field is a pk field
- `autoincrement` (`bool`): flag if field should be autoincrement
**Returns**:
`bool`: result of the check
<a name="fields.model_fields.ModelFieldFactory"></a>
## ModelFieldFactory Objects
```python
class ModelFieldFactory()
```
Default field factory that constructs Field classes and populates their values.
<a name="fields.model_fields.ModelFieldFactory.get_column_type"></a>
#### get\_column\_type
```python
| @classmethod
| get_column_type(cls, **kwargs: Any) -> Any
```
Return proper type of db column for given field type.
Accepts required and optional parameters that each column type accepts.
**Arguments**:
- `kwargs` (`Any`): key, value pairs of sqlalchemy options
**Returns**:
`sqlalchemy Column`: initialized column with proper options
<a name="fields.model_fields.ModelFieldFactory.validate"></a>
#### validate
```python
| @classmethod
| validate(cls, **kwargs: Any) -> None
```
Used to validate if all required parameters on a given field type are set.
**Arguments**:
- `kwargs` (`Any`): all params passed during construction
<a name="fields.model_fields.String"></a>
## String Objects
```python
class String(ModelFieldFactory, str)
```
String field factory that constructs Field classes and populates their values.
<a name="fields.model_fields.String.get_column_type"></a>
#### get\_column\_type
```python
| @classmethod
| get_column_type(cls, **kwargs: Any) -> Any
```
Return proper type of db column for given field type.
Accepts required and optional parameters that each column type accepts.
**Arguments**:
- `kwargs` (`Any`): key, value pairs of sqlalchemy options
**Returns**:
`sqlalchemy Column`: initialized column with proper options
<a name="fields.model_fields.String.validate"></a>
#### validate
```python
| @classmethod
| validate(cls, **kwargs: Any) -> None
```
Used to validate if all required parameters on a given field type are set.
**Arguments**:
- `kwargs` (`Any`): all params passed during construction
<a name="fields.model_fields.Integer"></a>
## Integer Objects
```python
class Integer(ModelFieldFactory, int)
```
Integer field factory that constructs Field classes and populates their values.
<a name="fields.model_fields.Integer.get_column_type"></a>
#### get\_column\_type
```python
| @classmethod
| get_column_type(cls, **kwargs: Any) -> Any
```
Return proper type of db column for given field type.
Accepts required and optional parameters that each column type accepts.
**Arguments**:
- `kwargs` (`Any`): key, value pairs of sqlalchemy options
**Returns**:
`sqlalchemy Column`: initialized column with proper options
<a name="fields.model_fields.Text"></a>
## Text Objects
```python
class Text(ModelFieldFactory, str)
```
Text field factory that constructs Field classes and populates their values.
<a name="fields.model_fields.Text.get_column_type"></a>
#### get\_column\_type
```python
| @classmethod
| get_column_type(cls, **kwargs: Any) -> Any
```
Return proper type of db column for given field type.
Accepts required and optional parameters that each column type accepts.
**Arguments**:
- `kwargs` (`Any`): key, value pairs of sqlalchemy options
**Returns**:
`sqlalchemy Column`: initialized column with proper options
<a name="fields.model_fields.Float"></a>
## Float Objects
```python
class Float(ModelFieldFactory, float)
```
Float field factory that constructs Field classes and populates their values.
<a name="fields.model_fields.Float.get_column_type"></a>
#### get\_column\_type
```python
| @classmethod
| get_column_type(cls, **kwargs: Any) -> Any
```
Return proper type of db column for given field type.
Accepts required and optional parameters that each column type accepts.
**Arguments**:
- `kwargs` (`Any`): key, value pairs of sqlalchemy options
**Returns**:
`sqlalchemy Column`: initialized column with proper options
<a name="fields.model_fields.DateTime"></a>
## DateTime Objects
```python
class DateTime(ModelFieldFactory, datetime.datetime)
```
DateTime field factory that constructs Field classes and populates their values.
<a name="fields.model_fields.DateTime.get_column_type"></a>
#### get\_column\_type
```python
| @classmethod
| get_column_type(cls, **kwargs: Any) -> Any
```
Return proper type of db column for given field type.
Accepts required and optional parameters that each column type accepts.
**Arguments**:
- `kwargs` (`Any`): key, value pairs of sqlalchemy options
**Returns**:
`sqlalchemy Column`: initialized column with proper options
<a name="fields.model_fields.Date"></a>
## Date Objects
```python
class Date(ModelFieldFactory, datetime.date)
```
Date field factory that constructs Field classes and populates their values.
<a name="fields.model_fields.Date.get_column_type"></a>
#### get\_column\_type
```python
| @classmethod
| get_column_type(cls, **kwargs: Any) -> Any
```
Return proper type of db column for given field type.
Accepts required and optional parameters that each column type accepts.
**Arguments**:
- `kwargs` (`Any`): key, value pairs of sqlalchemy options
**Returns**:
`sqlalchemy Column`: initialized column with proper options
<a name="fields.model_fields.Time"></a>
## Time Objects
```python
class Time(ModelFieldFactory, datetime.time)
```
Time field factory that constructs Field classes and populates their values.
<a name="fields.model_fields.Time.get_column_type"></a>
#### get\_column\_type
```python
| @classmethod
| get_column_type(cls, **kwargs: Any) -> Any
```
Return proper type of db column for given field type.
Accepts required and optional parameters that each column type accepts.
**Arguments**:
- `kwargs` (`Any`): key, value pairs of sqlalchemy options
**Returns**:
`sqlalchemy Column`: initialized column with proper options
<a name="fields.model_fields.JSON"></a>
## JSON Objects
```python
class JSON(ModelFieldFactory, pydantic.Json)
```
JSON field factory that constructs Field classes and populates their values.
<a name="fields.model_fields.JSON.get_column_type"></a>
#### get\_column\_type
```python
| @classmethod
| get_column_type(cls, **kwargs: Any) -> Any
```
Return proper type of db column for given field type.
Accepts required and optional parameters that each column type accepts.
**Arguments**:
- `kwargs` (`Any`): key, value pairs of sqlalchemy options
**Returns**:
`sqlalchemy Column`: initialized column with proper options
<a name="fields.model_fields.BigInteger"></a>
## BigInteger Objects
```python
class BigInteger(Integer, int)
```
BigInteger field factory that constructs Field classes and populates their values.
<a name="fields.model_fields.BigInteger.get_column_type"></a>
#### get\_column\_type
```python
| @classmethod
| get_column_type(cls, **kwargs: Any) -> Any
```
Return proper type of db column for given field type.
Accepts required and optional parameters that each column type accepts.
**Arguments**:
- `kwargs` (`Any`): key, value pairs of sqlalchemy options
**Returns**:
`sqlalchemy Column`: initialized column with proper options
<a name="fields.model_fields.SmallInteger"></a>
## SmallInteger Objects
```python
class SmallInteger(Integer, int)
```
SmallInteger field factory that constructs Field classes and populates their values.
<a name="fields.model_fields.SmallInteger.get_column_type"></a>
#### get\_column\_type
```python
| @classmethod
| get_column_type(cls, **kwargs: Any) -> Any
```
Return proper type of db column for given field type.
Accepts required and optional parameters that each column type accepts.
**Arguments**:
- `kwargs` (`Any`): key, value pairs of sqlalchemy options
**Returns**:
`sqlalchemy Column`: initialized column with proper options
<a name="fields.model_fields.Decimal"></a>
## Decimal Objects
```python
class Decimal(ModelFieldFactory, decimal.Decimal)
```
Decimal field factory that constructs Field classes and populates their values.
<a name="fields.model_fields.Decimal.get_column_type"></a>
#### get\_column\_type
```python
| @classmethod
| get_column_type(cls, **kwargs: Any) -> Any
```
Return proper type of db column for given field type.
Accepts required and optional parameters that each column type accepts.
**Arguments**:
- `kwargs` (`Any`): key, value pairs of sqlalchemy options
**Returns**:
`sqlalchemy Column`: initialized column with proper options
<a name="fields.model_fields.Decimal.validate"></a>
#### validate
```python
| @classmethod
| validate(cls, **kwargs: Any) -> None
```
Used to validate if all required parameters on a given field type are set.
**Arguments**:
- `kwargs` (`Any`): all params passed during construction
<a name="fields.model_fields.UUID"></a>
## UUID Objects
```python
class UUID(ModelFieldFactory, uuid.UUID)
```
UUID field factory that constructs Field classes and populates their values.
<a name="fields.model_fields.UUID.get_column_type"></a>
#### get\_column\_type
```python
| @classmethod
| get_column_type(cls, **kwargs: Any) -> Any
```
Return proper type of db column for given field type.
Accepts required and optional parameters that each column type accepts.
**Arguments**:
- `kwargs` (`Any`): key, value pairs of sqlalchemy options
**Returns**:
`sqlalchemy Column`: initialized column with proper options
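For illustration, a hypothetical model using several of the factories documented above; the model name and the exact parameter combinations are assumptions and should be checked against the field docs.
```python
import datetime
import decimal
import uuid

import databases
import pydantic
import sqlalchemy

import ormar

database = databases.Database("sqlite:///db.sqlite")
metadata = sqlalchemy.MetaData()


class Invoice(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    # each factory call below builds an ormar Field class and the matching sqlalchemy column
    id: uuid.UUID = ormar.UUID(primary_key=True, default=uuid.uuid4)
    number: str = ormar.String(max_length=32)
    amount: decimal.Decimal = ormar.Decimal(max_digits=10, decimal_places=2)
    issued_at: datetime.datetime = ormar.DateTime(default=datetime.datetime.now)
    notes: str = ormar.Text(default="")
    details: pydantic.Json = ormar.JSON(nullable=True)
```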


@@ -1,118 +0,0 @@
Contains documentation of the `ormar` internal API.
Note that this is a technical part of the documentation intended for `ormar` contributors.
!!!note
For completeness, as of now even the internal and special methods are documented and exposed in the API docs.
!!!warning
The current API docs version is a beta and not all methods are documented;
also some redundant items are included since it was partially auto-generated.
!!!danger
Ormar is still under development, and the **internals can change at any moment**.
You shouldn't rely even on the "public" methods if they are not documented in the
normal part of the docs.
## High level overview
Ormar is divided into packages for maintainability and ease of development.
Below you can find a short description of the structure of the whole project and
individual packages.
### Models
Contains the actual `ormar.Model` class, which is based on:
* `ormar.NewBaseModel` which in turn:
* inherits from `pydantic.BaseModel`,
* uses `ormar.ModelMetaclass` for all heavy lifting, relations declaration,
parsing `ormar` fields, creating `sqlalchemy` columns and tables etc.
* There are a lot of tasks during class creation, so `ormar` uses a number of
`helpers` methods separated by functionality: `pydantic`, `sqlalchemy`,
`relations` & `models`, located in the `helpers` submodule.
* inherits from `ormar.ModelTableProxy` that combines `Mixins` providing a special
additional behavior for `ormar.Models`
* `AliasMixin` - handling of column aliases, which are names changed only in db
* `ExcludableMixin` - handling excluding and including fields in dict() and database calls
* `MergeModelMixin` - handling merging of Models initialized from raw sql rows into Models that need to be merged,
for example parent models in a join query that are duplicated in the raw response.
* `PrefetchQueryMixin` - handling the resolution of relations and ids of models to extract when issuing
subsequent queries in prefetch_related
* `RelationMixin` - handling resolving relations names, related fields etc.
* `SavePrepareMixin` - handling converting related models to their pk values, translating ormar field
names into aliases etc.
### Fields
Contains `ormar.BaseField` that is a base for all fields.
All basic types are declared in `model_fields`, while relation fields are located in:
* `foreign_key`: `ForeignKey` relation, expanding relations (meaning initializing nested models),
creating dummy models with pk only that skip validation etc.
* `many_to_many`: `ManyToMany` relation that does not have a lot of logic of its own.
Related to fields is a `@property_field` decorator that is located in `decorators.property_field`.
There is also a special UUID field declaration for `sqlalchemy` that is based on `CHAR` field type.
### Query Set
Package that handles almost all interactions with db (some small parts are in `ormar.Model` and in `ormar.QuerysetProxy`).
Provides a `QuerySet` that is exposed on each Model as the `objects` property
(see the usage sketch after the list below).
It has a vast number of methods to query, filter, create, update and delete database rows.
* Actual construction of the queries is delegated to `Query` class
* which in turn uses `SqlJoin` to construct joins
* `Clause` to convert `filter` and `exclude` conditions into sql
* `FilterQuery` to apply filter clauses on query
* `OrderQuery` to apply order by clauses on query
* `LimitQuery` to apply limit clause on query
* `OffsetQuery` to apply offset clause on query
* For prefetch_related the same is done by `PrefetchQuery`
* Common helpers functions are extracted into `utils`
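As referenced above, a small usage sketch of the `objects` QuerySet; the `User` model is hypothetical.
```python
import databases
import sqlalchemy

import ormar

database = databases.Database("sqlite:///db.sqlite")
metadata = sqlalchemy.MetaData()


class User(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=100)


async def sample_queries() -> None:
    # create / filter / order_by / limit are all chained on the QuerySet exposed as `objects`
    await User.objects.create(name="Alice")
    users = await User.objects.filter(name__icontains="a").order_by("-id").limit(10).all()
    total = await User.objects.count()
    print(len(users), total)
```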
### Relations
Handles registering relations, adding to/removing from relations, as well as returning the
actual related models instead of the relation fields declared on Models.
* Each `ormar.Model` has its own `RelationManager` registered under `_orm` property.
* `RelationManager` handles `Relations` between two different models
* In case of reverse relations or m2m relations the `RelationProxy` is used which
is basically a list with some special methods that keeps a reference to a list of related models
* Also, for reverse relations and m2m relations `QuerySetProxy` is exposed, which is
used to query the already pre-filtered related models and handles Through model
instances for m2m relations, while delegating actual queries to `QuerySet`
* `AliasManager` handles registration of aliases for relations that are used in queries.
In order to be able to link multiple times to the same table in one query, each link
has to have a unique alias to properly identify columns and extract proper values.
It's a kind of global registry; aliases are randomly generated, so they might differ on each run.
* Common helpers functions are extracted into `utils`
### Signals
Handles sending signals on particular events.
* `SignalEmitter` is registered on each `ormar.Model` and allows registering any number of
receiver functions that will be notified on each event.
* For now only combinations of the (pre, post) (save, update, delete) events are pre-populated for the user,
although it's easy to register user `Signal`s.
* A set of decorators is prepared, each corresponding to one of the built-in signals,
that can be used to mark functions/methods that should become receivers; those decorators
are located in `decorators.signals` (see the sketch after this list).
* You can register the same function to different `ormar.Models`, but each Model has its own
Emitter that is independent and triggered on events for the given Model.
* Currently, there is no way to register global `Signal` triggered for all models.
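As referenced above, a sketch of registering a receiver with one of the built-in signal decorators; the `Album` model is illustrative.
```python
import databases
import sqlalchemy

import ormar

database = databases.Database("sqlite:///db.sqlite")
metadata = sqlalchemy.MetaData()


class Album(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=100)
    play_count: int = ormar.Integer(default=0)


@ormar.pre_save(Album)
async def normalize_album(sender, instance, **kwargs):
    # registered on Album's own SignalEmitter; runs before every save()
    instance.name = instance.name.strip()
```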
### Exceptions
Gathers all exceptions specific to `ormar`.
All `ormar` exceptions inherit from `AsyncOrmException`.


@@ -1,62 +0,0 @@
<a name="models.descriptors.descriptors"></a>
# models.descriptors.descriptors
<a name="models.descriptors.descriptors.PydanticDescriptor"></a>
## PydanticDescriptor Objects
```python
class PydanticDescriptor()
```
Pydantic descriptor simply delegates everything to pydantic model
<a name="models.descriptors.descriptors.JsonDescriptor"></a>
## JsonDescriptor Objects
```python
class JsonDescriptor()
```
Json descriptor dumps/loads strings to actual data on write/read
<a name="models.descriptors.descriptors.BytesDescriptor"></a>
## BytesDescriptor Objects
```python
class BytesDescriptor()
```
Bytes descriptor converts strings to bytes on write and converts bytes to str
if represent_as_base64_str flag is set, so the value can be dumped to json
<a name="models.descriptors.descriptors.PkDescriptor"></a>
## PkDescriptor Objects
```python
class PkDescriptor()
```
As of now it's basically a copy of PydanticDescriptor but that will
change in the future with multi column primary keys
<a name="models.descriptors.descriptors.RelationDescriptor"></a>
## RelationDescriptor Objects
```python
class RelationDescriptor()
```
Relation descriptor expands the relation to initialize the related model
before setting it to __dict__. Note that expanding also registers the
related model in RelationManager.
<a name="models.descriptors.descriptors.PropertyDescriptor"></a>
## PropertyDescriptor Objects
```python
class PropertyDescriptor()
```
Property descriptor handles methods decorated with @property_field decorator.
They are read only.


@@ -1,197 +0,0 @@
<a name="models.excludable"></a>
# models.excludable
<a name="models.excludable.Excludable"></a>
## Excludable Objects
```python
@dataclass
class Excludable()
```
Class that keeps sets of fields to exclude and include
<a name="models.excludable.Excludable.get_copy"></a>
#### get\_copy
```python
| get_copy() -> "Excludable"
```
Return copy of self to avoid in place modifications
**Returns**:
`ormar.models.excludable.Excludable`: copy of self with copied sets
<a name="models.excludable.Excludable.set_values"></a>
#### set\_values
```python
| set_values(value: Set, is_exclude: bool) -> None
```
Appends the data to include/exclude sets.
**Arguments**:
- `value` (`set`): set of values to add
- `is_exclude` (`bool`): flag if values are to be excluded or included
<a name="models.excludable.Excludable.is_included"></a>
#### is\_included
```python
| is_included(key: str) -> bool
```
Check if field is included (in set or set is {...})
**Arguments**:
- `key` (`str`): key to check
**Returns**:
`bool`: result of the check
<a name="models.excludable.Excludable.is_excluded"></a>
#### is\_excluded
```python
| is_excluded(key: str) -> bool
```
Check if field is excluded (in set or set is {...})
**Arguments**:
- `key` (`str`): key to check
**Returns**:
`bool`: result of the check
<a name="models.excludable.ExcludableItems"></a>
## ExcludableItems Objects
```python
class ExcludableItems()
```
Keeps a dictionary of Excludables by alias + model_name keys
to allow quick lookup by nested models without the need to traverse
deeply nested dictionaries and pass include/exclude around
<a name="models.excludable.ExcludableItems.from_excludable"></a>
#### from\_excludable
```python
| @classmethod
| from_excludable(cls, other: "ExcludableItems") -> "ExcludableItems"
```
Copy passed ExcludableItems to avoid inplace modifications.
**Arguments**:
- `other` (`ormar.models.excludable.ExcludableItems`): other excludable items to be copied
**Returns**:
`ormar.models.excludable.ExcludableItems`: copy of other
<a name="models.excludable.ExcludableItems.include_entry_count"></a>
#### include\_entry\_count
```python
| include_entry_count() -> int
```
Returns count of include items inside
<a name="models.excludable.ExcludableItems.get"></a>
#### get
```python
| get(model_cls: Type["Model"], alias: str = "") -> Excludable
```
Return Excludable for given model and alias.
**Arguments**:
- `model_cls` (`ormar.models.metaclass.ModelMetaclass`): target model to check
- `alias` (`str`): table alias from relation manager
**Returns**:
`ormar.models.excludable.Excludable`: Excludable for given model and alias
<a name="models.excludable.ExcludableItems.build"></a>
#### build
```python
| build(items: Union[List[str], str, Tuple[str], Set[str], Dict], model_cls: Type["Model"], is_exclude: bool = False) -> None
```
Receives one of the types of items and parses them so as to achieve
an end situation with one excludable per alias/model in relation.
Each excludable has two sets of values - one to include, one to exclude.
**Arguments**:
- `items` (`Union[List[str], str, Tuple[str], Set[str], Dict]`): values to be included or excluded
- `model_cls` (`ormar.models.metaclass.ModelMetaclass`): source model from which relations are constructed
- `is_exclude` (`bool`): flag if items should be included or excluded
<a name="models.excludable.ExcludableItems._set_excludes"></a>
#### \_set\_excludes
```python
| _set_excludes(items: Set, model_name: str, is_exclude: bool, alias: str = "") -> None
```
Sets set of values to be included or excluded for given key and model.
**Arguments**:
- `items` (`set`): items to include/exclude
- `model_name` (`str`): name of model to construct key
- `is_exclude` (`bool`): flag if values should be included or excluded
- `alias` (`str`):
<a name="models.excludable.ExcludableItems._traverse_dict"></a>
#### \_traverse\_dict
```python
| _traverse_dict(values: Dict, source_model: Type["Model"], model_cls: Type["Model"], is_exclude: bool, related_items: List = None, alias: str = "") -> None
```
Goes through dict of nested values and construct/update Excludables.
**Arguments**:
- `values` (`Dict`): items to include/exclude
- `source_model` (`ormar.models.metaclass.ModelMetaclass`): source model from which relations are constructed
- `model_cls` (`ormar.models.metaclass.ModelMetaclass`): model from which current relation is constructed
- `is_exclude` (`bool`): flag if values should be included or excluded
- `related_items` (`List`): list of names of related fields chain
- `alias` (`str`): alias of relation
<a name="models.excludable.ExcludableItems._traverse_list"></a>
#### \_traverse\_list
```python
| _traverse_list(values: Set[str], model_cls: Type["Model"], is_exclude: bool) -> None
```
Goes through list of values and construct/update Excludables.
**Arguments**:
- `values` (`set`): items to include/exclude
- `model_cls` (`ormar.models.metaclass.ModelMetaclass`): model from which current relation is constructed
- `is_exclude` (`bool`): flag if values should be included or excluded
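For illustration, the public entry points that end up building these Excludables are the `fields()`/`exclude_fields()` QuerySet methods (and the analogous include/exclude arguments of `dict()`); the `Account` model below is hypothetical.
```python
import databases
import sqlalchemy

import ormar

database = databases.Database("sqlite:///db.sqlite")
metadata = sqlalchemy.MetaData()


class Account(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=100)
    password: str = ormar.String(max_length=128, nullable=True)


async def public_accounts():
    # both calls are parsed by ExcludableItems.build() into per-model include/exclude sets
    return await Account.objects.fields({"id", "name"}).exclude_fields({"password"}).all()
```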


@@ -1,134 +0,0 @@
<a name="models.helpers.models"></a>
# models.helpers.models
<a name="models.helpers.models.is_field_an_forward_ref"></a>
#### is\_field\_an\_forward\_ref
```python
is_field_an_forward_ref(field: "BaseField") -> bool
```
Checks if the field is a relation field and whether any of the referenced models
are ForwardRefs that need to be updated before proceeding.
**Arguments**:
- `field` (`Type[BaseField]`): model field to verify
**Returns**:
`bool`: result of the check
<a name="models.helpers.models.populate_default_options_values"></a>
#### populate\_default\_options\_values
```python
populate_default_options_values(new_model: Type["Model"], model_fields: Dict) -> None
```
Sets all optional Meta values to their defaults
and sets model_fields that were already previously extracted.
All options that are not overwritten/set for all models should live here.
Current options are:
* constraints = []
* abstract = False
**Arguments**:
- `new_model` (`Model class`): newly constructed Model
- `model_fields` (`Union[Dict[str, type], Dict]`): dict of model fields
<a name="models.helpers.models.substitue_backend_pool_for_sqlite"></a>
#### substitue\_backend\_pool\_for\_sqlite
```python
substitue_backend_pool_for_sqlite(new_model: Type["Model"]) -> None
```
Recreates the Connection pool for sqlite3 with a new factory that
executes "PRAGMA foreign_keys=1;" on initialization to enable foreign keys.
**Arguments**:
- `new_model` (`Model class`): newly declared ormar Model
<a name="models.helpers.models.check_required_meta_parameters"></a>
#### check\_required\_meta\_parameters
```python
check_required_meta_parameters(new_model: Type["Model"]) -> None
```
Verifies if ormar.Model has database and metadata set.
Recreates Connection pool for sqlite3
**Arguments**:
- `new_model` (`Model class`): newly declared ormar Model
<a name="models.helpers.models.extract_annotations_and_default_vals"></a>
#### extract\_annotations\_and\_default\_vals
```python
extract_annotations_and_default_vals(attrs: Dict) -> Tuple[Dict, Dict]
```
Extracts annotations from class namespace dict and triggers
extraction of ormar model_fields.
**Arguments**:
- `attrs` (`Dict`): namespace of the class created
**Returns**:
`Tuple[Dict, Dict]`: namespace of the class updated, dict of extracted model_fields
<a name="models.helpers.models.group_related_list"></a>
#### group\_related\_list
```python
group_related_list(list_: List) -> collections.OrderedDict
```
Translates the list of related strings into a dictionary.
That way nested models are grouped to traverse them in the right order
and to avoid repetition.
Sample: ["people__houses", "people__cars__models", "people__cars__colors"]
will become:
{'people': {'houses': [], 'cars': ['models', 'colors']}}
Result dictionary is sorted by length of the values and by key
**Arguments**:
- `list_` (`List[str]`): list of related models used in select related
**Returns**:
`Dict[str, List]`: list converted to dictionary to avoid repetition and group nested models
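A simplified sketch of the grouping idea only; this is not ormar's actual implementation and it omits the final sorting by value length and key.
```python
import itertools
from typing import Dict, List


def group_related_list_sketch(related: List[str]) -> Dict:
    # Group "a__b__c" strings by their first segment and recurse into the remainders.
    grouped: Dict = {}
    ordered = sorted(related, key=lambda s: s.split("__")[0])
    for prefix, items in itertools.groupby(ordered, key=lambda s: s.split("__")[0]):
        remainders = [s.split("__", 1)[1] for s in items if "__" in s]
        if any("__" in r for r in remainders):
            grouped[prefix] = group_related_list_sketch(remainders)
        else:
            grouped[prefix] = remainders
    return grouped


# group_related_list_sketch(["people__houses", "people__cars__models", "people__cars__colors"])
# -> {'people': {'cars': ['models', 'colors'], 'houses': []}}  (ordering may differ from ormar's helper)
```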
<a name="models.helpers.models.meta_field_not_set"></a>
#### meta\_field\_not\_set
```python
meta_field_not_set(model: Type["Model"], field_name: str) -> bool
```
Checks if a field with the given name is already present in model.Meta.
Then checks if it's set to something truthy
(in practice meaning not None, as it's None or an ormar Field only).
**Arguments**:
- `model` (`Model class`): newly constructed model
- `field_name` (`str`): name of the ormar field
**Returns**:
`bool`: result of the check


@@ -1,109 +0,0 @@
<a name="models.helpers.pydantic"></a>
# models.helpers.pydantic
<a name="models.helpers.pydantic.create_pydantic_field"></a>
#### create\_pydantic\_field
```python
create_pydantic_field(field_name: str, model: Type["Model"], model_field: "ManyToManyField") -> None
```
Registers a pydantic field on the through model that leads to the passed model
and is registered under the passed field_name.
The through model is fetched from the through attribute on the passed model_field.
**Arguments**:
- `field_name` (`str`): field name to register
- `model` (`Model class`): type of field to register
- `model_field` (`ManyToManyField class`): relation field from which through model is extracted
<a name="models.helpers.pydantic.get_pydantic_field"></a>
#### get\_pydantic\_field
```python
get_pydantic_field(field_name: str, model: Type["Model"]) -> "ModelField"
```
Extracts the field type and whether it's required from the Model's model_fields by the passed
field_name. Returns a pydantic field with the type of the field_name field.
**Arguments**:
- `field_name` (`str`): field name to fetch from Model and name of pydantic field
- `model` (`Model class`): type of field to register
**Returns**:
`pydantic.ModelField`: newly created pydantic field
<a name="models.helpers.pydantic.populate_pydantic_default_values"></a>
#### populate\_pydantic\_default\_values
```python
populate_pydantic_default_values(attrs: Dict) -> Tuple[Dict, Dict]
```
Extracts ormar fields from annotations (deprecated) and from the namespace
dictionary of the class. Fields declared on the model are all subclasses of the
BaseField class.
Triggers conversion of the ormar field into pydantic FieldInfo, which has all needed
parameters saved.
Overwrites the annotations of ormar fields to the corresponding types declared on
ormar fields (constructed dynamically for relations).
Those annotations are later used by pydantic to construct its own fields.
**Arguments**:
- `attrs` (`Dict`): current class namespace
**Returns**:
`Tuple[Dict, Dict]`: namespace of the class updated, dict of extracted model_fields
<a name="models.helpers.pydantic.get_pydantic_base_orm_config"></a>
#### get\_pydantic\_base\_orm\_config
```python
get_pydantic_base_orm_config() -> Type[pydantic.BaseConfig]
```
Returns empty pydantic Config with orm_mode set to True.
**Returns**:
`pydantic Config`: empty default config with orm_mode set.
<a name="models.helpers.pydantic.get_potential_fields"></a>
#### get\_potential\_fields
```python
get_potential_fields(attrs: Dict) -> Dict
```
Gets all the fields in current class namespace that are Fields.
**Arguments**:
- `attrs` (`Dict`): current class namespace
**Returns**:
`Dict`: extracted fields that are ormar Fields
<a name="models.helpers.pydantic.remove_excluded_parent_fields"></a>
#### remove\_excluded\_parent\_fields
```python
remove_excluded_parent_fields(model: Type["Model"]) -> None
```
Removes pydantic fields that should be excluded from parent models
**Arguments**:
- `model` (`Type["Model"]`):


@@ -1,25 +0,0 @@
<a name="models.helpers.related_names_validation"></a>
# models.helpers.related\_names\_validation
<a name="models.helpers.related_names_validation.validate_related_names_in_relations"></a>
#### validate\_related\_names\_in\_relations
```python
validate_related_names_in_relations(model_fields: Dict, new_model: Type["Model"]) -> None
```
Performs a validation of related_names in relation fields (see the sketch below).
If multiple fields lead to the same related model,
only one can have an empty related_name param
(populated by default as model.name.lower()+'s').
Also, related_names have to be unique for a given related model.
**Raises**:
- `ModelDefinitionError`: if validation of related_names fail
**Arguments**:
- `model_fields` (`Dict[str, ormar.Field]`): dictionary of declared ormar model fields
- `new_model` (`Model class`):
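For illustration, two relations pointing at the same model, each with an explicit `related_name` so the validation above passes; the models are hypothetical.
```python
from typing import Optional

import databases
import sqlalchemy

import ormar

database = databases.Database("sqlite:///db.sqlite")
metadata = sqlalchemy.MetaData()


class Team(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=100)


class Match(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)
    # at most one relation to Team could rely on the default name derived from the owner model,
    # so both get explicit, unique related_names to avoid ModelDefinitionError
    home_team: Optional[Team] = ormar.ForeignKey(Team, related_name="home_matches")
    away_team: Optional[Team] = ormar.ForeignKey(Team, related_name="away_matches")
```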


@@ -1,170 +0,0 @@
<a name="models.helpers.relations"></a>
# models.helpers.relations
<a name="models.helpers.relations.register_relation_on_build"></a>
#### register\_relation\_on\_build
```python
register_relation_on_build(field: "ForeignKeyField") -> None
```
Registers a ForeignKey relation in the alias_manager to set a table_prefix.
Registration also includes the reverse relation side to be able to join both sides.
The relation is registered by model name and relation field name to allow for multiple
relations between two Models that need to have different
aliases for proper sql joins.
**Arguments**:
- `field` (`ForeignKey class`): relation field
<a name="models.helpers.relations.register_many_to_many_relation_on_build"></a>
#### register\_many\_to\_many\_relation\_on\_build
```python
register_many_to_many_relation_on_build(field: "ManyToManyField") -> None
```
Registers connection between the through model and both sides of the m2m relation.
Registration also includes the reverse relation side so that both sides can be joined.
The relation is registered by model name and relation field name to allow multiple
relations between two Models that need different
aliases for proper sql joins.
By default the relation name is model.name.lower().
**Arguments**:
- `field` (`ManyToManyField class`): relation field
<a name="models.helpers.relations.expand_reverse_relationship"></a>
#### expand\_reverse\_relationship
```python
expand_reverse_relationship(model_field: "ForeignKeyField") -> None
```
If the reverse relation has not been set before it's set here.
**Arguments**:
- `model_field`:
**Returns**:
`None`: None
<a name="models.helpers.relations.expand_reverse_relationships"></a>
#### expand\_reverse\_relationships
```python
expand_reverse_relationships(model: Type["Model"]) -> None
```
Iterates through model_fields of the given model and verifies that all reverse
relations have been populated on related models.
If the reverse relation has not been set before it's set here.
**Arguments**:
- `model` (`Model class`): model on which relation should be checked and registered
<a name="models.helpers.relations.register_reverse_model_fields"></a>
#### register\_reverse\_model\_fields
```python
register_reverse_model_fields(model_field: "ForeignKeyField") -> None
```
Registers reverse ForeignKey field on the related model.
By default it's name.lower()+'s' of the model on which the relation is defined.
But if related_name is provided it's registered with that name.
Autogenerated reverse fields also set related_name to the original field name.
**Arguments**:
- `model_field` (`relation Field`): original relation ForeignKey field
<a name="models.helpers.relations.register_through_shortcut_fields"></a>
#### register\_through\_shortcut\_fields
```python
register_through_shortcut_fields(model_field: "ManyToManyField") -> None
```
Registers m2m relation through shortcut on both ends of the relation.
**Arguments**:
- `model_field` (`ManyToManyField`): relation field defined in parent model
<a name="models.helpers.relations.register_relation_in_alias_manager"></a>
#### register\_relation\_in\_alias\_manager
```python
register_relation_in_alias_manager(field: "ForeignKeyField") -> None
```
Registers the relation (and reverse relation) in alias manager.
The m2m relations require registration of through model between
actual end models of the relation.
Delegates the actual registration to:
m2m - register_many_to_many_relation_on_build
fk - register_relation_on_build
**Arguments**:
- `field` (`ForeignKey or ManyToManyField class`): relation field
<a name="models.helpers.relations.verify_related_name_dont_duplicate"></a>
#### verify\_related\_name\_dont\_duplicate
```python
verify_related_name_dont_duplicate(related_name: str, model_field: "ForeignKeyField") -> None
```
Verifies whether the used related_name (regardless of whether it is user defined or
auto generated) is already used on the related model but is connected with a different
model than the one being connected right now.
**Raises**:
- `ModelDefinitionError`: if the name is already used but leads to a different related
model
**Arguments**:
- `related_name`:
- `model_field` (`relation Field`): original relation ForeignKey field
**Returns**:
`None`: None
<a name="models.helpers.relations.reverse_field_not_already_registered"></a>
#### reverse\_field\_not\_already\_registered
```python
reverse_field_not_already_registered(model_field: "ForeignKeyField") -> bool
```
Checks if the child is already registered in the parent's pydantic fields.
**Raises**:
- `ModelDefinitionError`: if the related name is already used but leads to a different
related model
**Arguments**:
- `model_field` (`relation Field`): original relation ForeignKey field
**Returns**:
`bool`: result of the check

@ -1,247 +0,0 @@
<a name="models.helpers.sqlalchemy"></a>
# models.helpers.sqlalchemy
<a name="models.helpers.sqlalchemy.adjust_through_many_to_many_model"></a>
#### adjust\_through\_many\_to\_many\_model
```python
adjust_through_many_to_many_model(model_field: "ManyToManyField") -> None
```
Registers m2m relation on through model.
Sets ormar.ForeignKey from through model to both child and parent models.
Sets sqlalchemy.ForeignKey to both child and parent models.
Sets pydantic fields with child and parent model types.
**Arguments**:
- `model_field` (`ManyToManyField`): relation field defined in parent model
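A hedged sketch of what this means in practice for hypothetical `Post`/`Category` models: declaring `ManyToMany` without an explicit Through model generates one, and after the adjustment its table carries foreign key columns to both sides (the exact column names in the comment are an assumption).
```python
from typing import List

import databases
import sqlalchemy
import ormar

database = databases.Database("sqlite:///db.sqlite")
metadata = sqlalchemy.MetaData()


class Category(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=40)


class Post(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)
    title: str = ormar.String(max_length=200)
    categories: List[Category] = ormar.ManyToMany(Category)


# the auto-generated Through model now has FK columns to both ends of the relation
through = Post.Meta.model_fields["categories"].through
print(through.Meta.table.columns.keys())  # assumption: something like ['id', 'category', 'post']
```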
<a name="models.helpers.sqlalchemy.create_and_append_m2m_fk"></a>
#### create\_and\_append\_m2m\_fk
```python
create_and_append_m2m_fk(model: Type["Model"], model_field: "ManyToManyField", field_name: str) -> None
```
Registers a sqlalchemy Column with a sqlalchemy.ForeignKey leading to the model.
The newly created column is added to the m2m through model's Meta columns and table.
**Arguments**:
- `field_name` (`str`): name of the column to create
- `model` (`Model class`): Model class to which FK should be created
- `model_field` (`ManyToManyField field`): field with ManyToMany relation
<a name="models.helpers.sqlalchemy.check_pk_column_validity"></a>
#### check\_pk\_column\_validity
```python
check_pk_column_validity(field_name: str, field: "BaseField", pkname: Optional[str]) -> Optional[str]
```
Receives the field marked as primary key and verifies that the pkname
was not already set (only one allowed per model) and that the field is not marked
as pydantic_only, as it needs to be a database field.
**Raises**:
- `ModelDefinitionError`: if pkname is already set or the field is pydantic_only
**Arguments**:
- `field_name` (`str`): name of field
- `field` (`BaseField`): ormar.Field
- `pkname` (`Optional[str]`): already set pkname
**Returns**:
`str`: name of the field that should be set as pkname
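A quick sketch of the failure mode: declaring a second `primary_key` column (hypothetical `Broken` model below) is rejected with `ModelDefinitionError` while the class is being built.
```python
import databases
import sqlalchemy
import ormar
from ormar.exceptions import ModelDefinitionError

database = databases.Database("sqlite:///db.sqlite")
metadata = sqlalchemy.MetaData()

try:
    class Broken(ormar.Model):
        class Meta:
            database = database
            metadata = metadata

        id: int = ormar.Integer(primary_key=True)
        code: int = ormar.Integer(primary_key=True)  # second primary key is not allowed
except ModelDefinitionError as exc:
    print(f"rejected: {exc}")
```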
<a name="models.helpers.sqlalchemy.sqlalchemy_columns_from_model_fields"></a>
#### sqlalchemy\_columns\_from\_model\_fields
```python
sqlalchemy_columns_from_model_fields(model_fields: Dict, new_model: Type["Model"]) -> Tuple[Optional[str], List[sqlalchemy.Column]]
```
Iterates over the model fields declared on the Model and extracts fields that
should be treated as database fields.
If the model is empty it sets a mandatory id field as the primary key
(used in through models in m2m relations).
Triggers a validation of related_names in relation fields. If multiple fields
lead to the same related model only one can have an empty related_name param.
Also related_names have to be unique.
Triggers validation of primary_key - only one, required pk can be set
and it cannot be pydantic_only.
Appends a field to columns unless it's pydantic_only,
a virtual ForeignKey or a ManyToMany field.
Sets `owner` on each model_field as a reference to the newly created Model.
**Raises**:
- `ModelDefinitionError`: if validation of related_names fails,
or pkname validation fails.
**Arguments**:
- `model_fields` (`Dict[str, ormar.Field]`): dictionary of declared ormar model fields
- `new_model` (`Model class`):
**Returns**:
`Tuple[Optional[str], List[sqlalchemy.Column]]`: pkname, list of sqlalchemy columns
<a name="models.helpers.sqlalchemy._process_fields"></a>
#### \_process\_fields
```python
_process_fields(model_fields: Dict, new_model: Type["Model"]) -> Tuple[Optional[str], List[sqlalchemy.Column]]
```
Helper method.
Populates pkname and columns.
Triggers validation of primary_key - only one, required pk can be set
and it cannot be pydantic_only.
Appends a field to columns unless it's pydantic_only,
a virtual ForeignKey or a ManyToMany field.
Sets `owner` on each model_field as a reference to the newly created Model.
**Raises**:
- `ModelDefinitionError`: if validation of related_names fails,
or pkname validation fails.
**Arguments**:
- `model_fields` (`Dict[str, ormar.Field]`): dictionary of declared ormar model fields
- `new_model` (`Model class`):
**Returns**:
`Tuple[Optional[str], List[sqlalchemy.Column]]`: pkname, list of sqlalchemy columns
<a name="models.helpers.sqlalchemy._is_through_model_not_set"></a>
#### \_is\_through\_model\_not\_set
```python
_is_through_model_not_set(field: "BaseField") -> bool
```
Alias for an if-check that verifies whether the through model was created.
**Arguments**:
- `field` (`"BaseField"`): field to check
**Returns**:
`bool`: result of the check
<a name="models.helpers.sqlalchemy._is_db_field"></a>
#### \_is\_db\_field
```python
_is_db_field(field: "BaseField") -> bool
```
Alias for an if-check that verifies whether the field should be included in the database.
**Arguments**:
- `field` (`"BaseField"`): field to check
**Returns**:
`bool`: result of the check
<a name="models.helpers.sqlalchemy.populate_meta_tablename_columns_and_pk"></a>
#### populate\_meta\_tablename\_columns\_and\_pk
```python
populate_meta_tablename_columns_and_pk(name: str, new_model: Type["Model"]) -> Type["Model"]
```
Sets the Model tablename if it's not already set in Meta.
The default tablename, if not present, is the lowercased class name + s (i.e. Bed becomes -> beds).
Checks if the Model's Meta has pkname and columns set.
If not, calls sqlalchemy_columns_from_model_fields to populate
columns from the ormar.fields definitions.
**Raises**:
- `ModelDefinitionError`: if pkname is not present raises ModelDefinitionError.
Each model has to have a pk.
**Arguments**:
- `name` (`str`): name of the current Model
- `new_model` (`ormar.models.metaclass.ModelMetaclass`): currently constructed Model
**Returns**:
`ormar.models.metaclass.ModelMetaclass`: Model with populated pkname and columns in Meta
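A tiny sketch of the default naming described above (hypothetical `Bed` model, sqlite URL): when `tablename` is not set in Meta, ormar derives it from the class name, and `pkname` is populated from the declared primary key field.
```python
import databases
import sqlalchemy
import ormar

database = databases.Database("sqlite:///db.sqlite")
metadata = sqlalchemy.MetaData()


class Bed(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)


print(Bed.Meta.tablename)  # 'beds' - lowercased class name with an appended 's'
print(Bed.Meta.pkname)     # 'id'   - taken from the primary_key field
```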
<a name="models.helpers.sqlalchemy.check_for_null_type_columns_from_forward_refs"></a>
#### check\_for\_null\_type\_columns\_from\_forward\_refs
```python
check_for_null_type_columns_from_forward_refs(meta: "ModelMeta") -> bool
```
Checks if any column is of NullType(), meaning it's an empty column created from a ForwardRef
**Arguments**:
- `meta` (`Model class Meta`): Meta class of the Model without sqlalchemy table constructed
**Returns**:
`bool`: result of the check
<a name="models.helpers.sqlalchemy.populate_meta_sqlalchemy_table_if_required"></a>
#### populate\_meta\_sqlalchemy\_table\_if\_required
```python
populate_meta_sqlalchemy_table_if_required(meta: "ModelMeta") -> None
```
Constructs sqlalchemy table out of columns and parameters set on Meta class.
It populates name, metadata, columns and constraints.
**Arguments**:
- `meta` (`Model class Meta`): Meta class of the Model without sqlalchemy table constructed
**Returns**:
`Model class`: class with populated Meta.table
<a name="models.helpers.sqlalchemy.update_column_definition"></a>
#### update\_column\_definition
```python
update_column_definition(model: Union[Type["Model"], Type["NewBaseModel"]], field: "ForeignKeyField") -> None
```
Updates a column with a new type column based on updated parameters in FK fields.
**Arguments**:
- `model` (`Type["Model"]`): model on which columns needs to be updated
- `field` (`ForeignKeyField`): field with column definition that requires update
**Returns**:
`None`: None

@ -1,257 +0,0 @@
<a name="models.helpers.validation"></a>
# models.helpers.validation
<a name="models.helpers.validation.check_if_field_has_choices"></a>
#### check\_if\_field\_has\_choices
```python
check_if_field_has_choices(field: BaseField) -> bool
```
Checks if the given field has choices populated.
If it has, a validator for this field needs to be attached.
**Arguments**:
- `field` (`BaseField`): ormar field to check
**Returns**:
`bool`: result of the check
<a name="models.helpers.validation.convert_choices_if_needed"></a>
#### convert\_choices\_if\_needed
```python
convert_choices_if_needed(field: "BaseField", value: Any) -> Tuple[Any, List]
```
Converts dates to isoformat as fastapi can check this condition in routes
and the fields are not yet parsed.
Converts enums to a list of their values.
Converts uuids to strings.
Converts decimal to float with the given scale.
**Arguments**:
- `field` (`BaseField`): ormar field to check with choices
- `value` (`Any`): current value of the field to verify
**Returns**:
`Tuple[Any, List]`: value, choices list
<a name="models.helpers.validation.validate_choices"></a>
#### validate\_choices
```python
validate_choices(field: "BaseField", value: Any) -> None
```
Validates if given value is in provided choices.
**Raises**:
- `ValueError`: If value is not in choices.
**Arguments**:
- `field` (`BaseField`): field to validate
- `value` (`Any`): value of the field
<a name="models.helpers.validation.choices_validator"></a>
#### choices\_validator
```python
choices_validator(cls: Type["Model"], values: Dict[str, Any]) -> Dict[str, Any]
```
Validator that is attached to pydantic model pre root validators.
Validator checks if field value is in field.choices list.
**Raises**:
- `ValueError`: if field value is outside of allowed choices.
**Arguments**:
- `cls` (`Model class`): constructed class
- `values` (`Dict[str, Any]`): dictionary of field values (pydantic side)
**Returns**:
`Dict[str, Any]`: values if pass validation, otherwise exception is raised
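A sketch of the validator in action for a hypothetical `Ticket` model: constructing an instance with a value outside `choices` fails (pydantic's `ValidationError` subclasses `ValueError`, so catching `ValueError` covers both code paths).
```python
import databases
import sqlalchemy
import ormar

database = databases.Database("sqlite:///db.sqlite")
metadata = sqlalchemy.MetaData()


class Ticket(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)
    status: str = ormar.String(max_length=20, choices=["open", "closed"])


try:
    Ticket(status="archived")  # not in the allowed choices
except ValueError as exc:
    print(f"rejected: {exc}")
```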
<a name="models.helpers.validation.generate_model_example"></a>
#### generate\_model\_example
```python
generate_model_example(model: Type["Model"], relation_map: Dict = None) -> Dict
```
Generates example to be included in schema in fastapi.
**Arguments**:
- `model` (`Type["Model"]`): ormar.Model
- `relation_map` (`Optional[Dict]`): dict with relations to follow
**Returns**:
`Dict[str, int]`: dict with example values
<a name="models.helpers.validation.populates_sample_fields_values"></a>
#### populates\_sample\_fields\_values
```python
populates_sample_fields_values(example: Dict[str, Any], name: str, field: BaseField, relation_map: Dict = None) -> None
```
Iterates the field and sets fields to sample values
**Arguments**:
- `field` (`BaseField`): ormar field
- `name` (`str`): name of the field
- `example` (`Dict[str, Any]`): example dict
- `relation_map` (`Optional[Dict]`): dict with relations to follow
<a name="models.helpers.validation.get_nested_model_example"></a>
#### get\_nested\_model\_example
```python
get_nested_model_example(name: str, field: "BaseField", relation_map: Dict) -> Union[List, Dict]
```
Gets representation of nested model.
**Arguments**:
- `name` (`str`): name of the field to follow
- `field` (`BaseField`): ormar field
- `relation_map` (`Dict`): dict with relation map
**Returns**:
`Union[List, Dict]`: nested model or list of nested model repr
<a name="models.helpers.validation.generate_pydantic_example"></a>
#### generate\_pydantic\_example
```python
generate_pydantic_example(pydantic_model: Type[pydantic.BaseModel], exclude: Set = None) -> Dict
```
Generates dict with example.
**Arguments**:
- `pydantic_model` (`Type[pydantic.BaseModel]`): model to parse
- `exclude` (`Optional[Set]`): list of fields to exclude
**Returns**:
`Dict`: dict with fields and sample values
<a name="models.helpers.validation.get_pydantic_example_repr"></a>
#### get\_pydantic\_example\_repr
```python
get_pydantic_example_repr(type_: Any) -> Any
```
Gets sample representation of pydantic field for example dict.
**Arguments**:
- `type_` (`Any`): type of pydantic field
**Returns**:
`Any`: representation to include in example
<a name="models.helpers.validation.overwrite_example_and_description"></a>
#### overwrite\_example\_and\_description
```python
overwrite_example_and_description(schema: Dict[str, Any], model: Type["Model"]) -> None
```
Overwrites the example with properly nested children models.
Overwrites the description if it's taken from ormar.Model.
**Arguments**:
- `schema` (`Dict[str, Any]`): schema of current model
- `model` (`Type["Model"]`): model class
<a name="models.helpers.validation.overwrite_binary_format"></a>
#### overwrite\_binary\_format
```python
overwrite_binary_format(schema: Dict[str, Any], model: Type["Model"]) -> None
```
Overwrites format of the field if it's a LargeBinary field with
a flag to represent the field as base64 encoded string.
**Arguments**:
- `schema` (`Dict[str, Any]`): schema of current model
- `model` (`Type["Model"]`): model class
<a name="models.helpers.validation.construct_modify_schema_function"></a>
#### construct\_modify\_schema\_function
```python
construct_modify_schema_function(fields_with_choices: List) -> SchemaExtraCallable
```
Modifies the schema to include fields with choices validator.
Those fields will be displayed in schema as Enum types with available choices
values listed next to them.
Note that schema extra has to be a function, otherwise it's called too soon,
before all the relations are expanded.
**Arguments**:
- `fields_with_choices` (`List`): list of fields with choices validation
**Returns**:
`Callable`: callable that will be run by pydantic to modify the schema
<a name="models.helpers.validation.construct_schema_function_without_choices"></a>
#### construct\_schema\_function\_without\_choices
```python
construct_schema_function_without_choices() -> SchemaExtraCallable
```
Modifies model example and description if needed.
Note that schema extra has to be a function, otherwise it's called too soon,
before all the relations are expanded.
**Returns**:
`Callable`: callable that will be run by pydantic to modify the schema
<a name="models.helpers.validation.populate_choices_validators"></a>
#### populate\_choices\_validators
```python
populate_choices_validators(model: Type["Model"]) -> None
```
Checks if Model has any fields with choices set.
If yes it adds choices validation into pre root validators.
**Arguments**:
- `model` (`Model class`): newly constructed Model

@ -1,90 +0,0 @@
<a name="models.mixins.alias_mixin"></a>
# models.mixins.alias\_mixin
<a name="models.mixins.alias_mixin.AliasMixin"></a>
## AliasMixin Objects
```python
class AliasMixin()
```
Used to translate field names into database column names.
<a name="models.mixins.alias_mixin.AliasMixin.get_column_alias"></a>
#### get\_column\_alias
```python
| @classmethod
| get_column_alias(cls, field_name: str) -> str
```
Returns db alias (column name in db) for given ormar field.
For fields without an alias the field name is returned.
**Arguments**:
- `field_name` (`str`): name of the field to get alias from
**Returns**:
`str`: alias (db name) if set, otherwise passed name
<a name="models.mixins.alias_mixin.AliasMixin.get_column_name_from_alias"></a>
#### get\_column\_name\_from\_alias
```python
| @classmethod
| get_column_name_from_alias(cls, alias: str) -> str
```
Returns the ormar field name for a given db alias (column name in db).
If the field does not have an alias it's returned as-is.
**Arguments**:
- `alias` (`str`):
**Returns**:
`str`: field name if set, otherwise passed alias (db name)
<a name="models.mixins.alias_mixin.AliasMixin.translate_columns_to_aliases"></a>
#### translate\_columns\_to\_aliases
```python
| @classmethod
| translate_columns_to_aliases(cls, new_kwargs: Dict) -> Dict
```
Translates dictionary of model fields changing field names into aliases.
If field has no alias the field name remains intact.
Only fields present in the dictionary are translated.
**Arguments**:
- `new_kwargs` (`Dict`): dict with fields names and their values
**Returns**:
`Dict`: dict with aliases and their values
<a name="models.mixins.alias_mixin.AliasMixin.translate_aliases_to_columns"></a>
#### translate\_aliases\_to\_columns
```python
| @classmethod
| translate_aliases_to_columns(cls, new_kwargs: Dict) -> Dict
```
Translates dictionary of model fields changing aliases into field names.
If field has no alias the alias is already a field name.
Only fields present in the dictionary are translated.
**Arguments**:
- `new_kwargs` (`Dict`): dict with aliases and their values
**Returns**:
`Dict`: dict with fields names and their values
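A short sketch (hypothetical `Child` model) of the alias translation these methods provide: the `name` parameter on a field sets the database column name, while the python attribute keeps the field name.
```python
import databases
import sqlalchemy
import ormar

database = databases.Database("sqlite:///db.sqlite")
metadata = sqlalchemy.MetaData()


class Child(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)
    first_name: str = ormar.String(max_length=100, name="fname")  # db column is 'fname'


print(Child.get_column_alias("first_name"))       # 'fname'
print(Child.get_column_name_from_alias("fname"))  # 'first_name'
print(Child.get_column_alias("id"))               # 'id' - no alias, field name passed through
```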

@ -1,149 +0,0 @@
<a name="models.mixins.excludable_mixin"></a>
# models.mixins.excludable\_mixin
<a name="models.mixins.excludable_mixin.ExcludableMixin"></a>
## ExcludableMixin Objects
```python
class ExcludableMixin(RelationMixin)
```
Used to include/exclude given set of fields on models during load and dict() calls.
<a name="models.mixins.excludable_mixin.ExcludableMixin.get_child"></a>
#### get\_child
```python
| @staticmethod
| get_child(items: Union[Set, Dict, None], key: str = None) -> Union[Set, Dict, None]
```
Used to get nested dictionary keys if they exist, otherwise returns the
passed items.
**Arguments**:
- `items` (`Union[Set, Dict, None]`): bag of items to include or exclude
- `key` (`str`): name of the child to extract
**Returns**:
`Union[Set, Dict, None]`: child extracted from items if exists
<a name="models.mixins.excludable_mixin.ExcludableMixin._populate_pk_column"></a>
#### \_populate\_pk\_column
```python
| @staticmethod
| _populate_pk_column(model: Union[Type["Model"], Type["ModelRow"]], columns: List[str], use_alias: bool = False) -> List[str]
```
Adds primary key column/alias (depends on use_alias flag) to list of
column names that are selected.
**Arguments**:
- `model` (`Type["Model"]`): model on columns are selected
- `columns` (`List[str]`): list of columns names
- `use_alias` (`bool`): flag to set if aliases or field names should be used
**Returns**:
`List[str]`: list of columns names with pk column in it
<a name="models.mixins.excludable_mixin.ExcludableMixin.own_table_columns"></a>
#### own\_table\_columns
```python
| @classmethod
| own_table_columns(cls, model: Union[Type["Model"], Type["ModelRow"]], excludable: ExcludableItems, alias: str = "", use_alias: bool = False, add_pk_columns: bool = True) -> List[str]
```
Returns a list of aliases or field names for the given model.
The use_alias flag switches between aliases and names.
If fields are provided, only the fields included in them will be returned.
Fields provided in exclude_fields will be excluded from the return value.
The primary key field is always added and cannot be excluded (it will be added anyway).
**Arguments**:
- `add_pk_columns` (`bool`): flag if add primary key - always yes if ormar parses data
- `alias` (`str`): relation prefix
- `excludable` (`ExcludableItems`): structure of fields to include and exclude
- `model` (`Type["Model"]`): model on columns are selected
- `use_alias` (`bool`): flag if aliases or field names should be used
**Returns**:
`List[str]`: list of column field names or aliases
<a name="models.mixins.excludable_mixin.ExcludableMixin._update_excluded_with_related"></a>
#### \_update\_excluded\_with\_related
```python
| @classmethod
| _update_excluded_with_related(cls, exclude: Union[Set, Dict, None]) -> Set
```
Used during generation of the dict().
To avoid cyclical references and the max recursion limit, nested models have to
exclude related models that are not mandatory.
For a main model (not nested) only nullable related field names are added to the
exclusion; for nested models all related models are excluded.
**Arguments**:
- `exclude` (`Union[Set, Dict, None]`): set/dict with fields to exclude
**Returns**:
`Union[Set, Dict]`: set or dict with excluded fields added.
<a name="models.mixins.excludable_mixin.ExcludableMixin._update_excluded_with_pks_and_through"></a>
#### \_update\_excluded\_with\_pks\_and\_through
```python
| @classmethod
| _update_excluded_with_pks_and_through(cls, exclude: Set, exclude_primary_keys: bool, exclude_through_models: bool) -> Set
```
Updates excluded names with name of pk column if exclude flag is set.
**Arguments**:
- `exclude` (`Set`): set of names to exclude
- `exclude_primary_keys` (`bool`): flag if the primary keys should be excluded
- `exclude_through_models` (`bool`): flag if the through models should be excluded
**Returns**:
`Set`: set updated with pk if flag is set
<a name="models.mixins.excludable_mixin.ExcludableMixin.get_names_to_exclude"></a>
#### get\_names\_to\_exclude
```python
| @classmethod
| get_names_to_exclude(cls, excludable: ExcludableItems, alias: str) -> Set
```
Returns a set of model field names that should be explicitly excluded
during model initialization.
Those fields will be set to None to avoid ormar/pydantic setting default
values on them. They should be returned as None in any case.
Used in parsing data from database rows that construct Models by initializing
them with dicts constructed from those db rows.
**Arguments**:
- `alias` (`str`): alias of current relation
- `excludable` (`ExcludableItems`): structure of fields to include and exclude
**Returns**:
`Set`: set of field names that should be excluded
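For context, a minimal sketch of the user-facing side of these helpers (hypothetical `Album` model): `dict()` accepts `include`/`exclude` sets (and nested dicts for relations), the same shapes that back `fields()`/`exclude_fields()` on the queryset.
```python
import databases
import sqlalchemy
import ormar

database = databases.Database("sqlite:///db.sqlite")
metadata = sqlalchemy.MetaData()


class Album(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=100)
    year: int = ormar.Integer()


album = Album(id=1, name="Malibu", year=2001)
print(album.dict(exclude={"year"}))  # {'id': 1, 'name': 'Malibu'}
print(album.dict(include={"name"}))  # {'name': 'Malibu'}
```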

@ -1,88 +0,0 @@
<a name="models.mixins.merge_mixin"></a>
# models.mixins.merge\_mixin
<a name="models.mixins.merge_mixin.MergeModelMixin"></a>
## MergeModelMixin Objects
```python
class MergeModelMixin()
```
Used to merge model instances returned by the database,
already initialized to ormar Models.
Models can duplicate during joins when the parent model has multiple child rows;
in the end all parent (main) models should be unique.
<a name="models.mixins.merge_mixin.MergeModelMixin.merge_instances_list"></a>
#### merge\_instances\_list
```python
| @classmethod
| merge_instances_list(cls, result_rows: List["Model"]) -> List["Model"]
```
Merges a list of models into list of unique models.
Models can duplicate during joins when parent model has multiple child rows,
in the end all parent (main) models should be unique.
**Arguments**:
- `result_rows` (`List["Model"]`): list of already initialized Models with child models populated; each instance is one row in the db and some models can duplicate
**Returns**:
`List["Model"]`: list of merged models where each main model is unique
<a name="models.mixins.merge_mixin.MergeModelMixin.merge_two_instances"></a>
#### merge\_two\_instances
```python
| @classmethod
| merge_two_instances(cls, one: "Model", other: "Model", relation_map: Dict = None) -> "Model"
```
Merges the current (other) Model and the previous one (one) and returns the current
Model instance with data merged from the previous one.
If needed it calls itself recursively and also merges children models.
**Arguments**:
- `relation_map` (`Dict`): map of models relations to follow
- `one` (`Model`): previous model instance
- `other` (`Model`): current model instance
**Returns**:
`Model`: current Model instance with data merged from previous one.
<a name="models.mixins.merge_mixin.MergeModelMixin._merge_items_lists"></a>
#### \_merge\_items\_lists
```python
| @classmethod
| _merge_items_lists(cls, field_name: str, current_field: List, other_value: List, relation_map: Optional[Dict]) -> List
```
Takes two lists of nested models and processes them, going deeper
according to the map.
If a model from one's list is in other -> they are merged, with the relations
to follow passed from the map.
If one's model is not in other it's simply appended to the list.
**Arguments**:
- `field_name` (`str`): name of the current relation field
- `current_field` (`List[Model]`): list of nested models from one model
- `other_value` (`List[Model]`): list of nested models from other model
- `relation_map` (`Dict`): map of relations to follow
**Returns**:
`List[Model]`: merged list of models

@ -1,100 +0,0 @@
<a name="models.mixins.prefetch_mixin"></a>
# models.mixins.prefetch\_mixin
<a name="models.mixins.prefetch_mixin.PrefetchQueryMixin"></a>
## PrefetchQueryMixin Objects
```python
class PrefetchQueryMixin(RelationMixin)
```
Used in PrefetchQuery to extract ids and names of models to prefetch.
<a name="models.mixins.prefetch_mixin.PrefetchQueryMixin.get_clause_target_and_filter_column_name"></a>
#### get\_clause\_target\_and\_filter\_column\_name
```python
| @staticmethod
| get_clause_target_and_filter_column_name(parent_model: Type["Model"], target_model: Type["Model"], reverse: bool, related: str) -> Tuple[Type["Model"], str]
```
Returns Model on which query clause should be performed and name of the column.
**Arguments**:
- `parent_model` (`Type[Model]`): related model that the relation leads to
- `target_model` (`Type[Model]`): model on which the query should be performed
- `reverse` (`bool`): flag if the relation is reverse
- `related` (`str`): name of the relation field
**Returns**:
`Tuple[Type[Model], str]`: Model on which query clause should be performed and name of the column
<a name="models.mixins.prefetch_mixin.PrefetchQueryMixin.get_column_name_for_id_extraction"></a>
#### get\_column\_name\_for\_id\_extraction
```python
| @staticmethod
| get_column_name_for_id_extraction(parent_model: Type["Model"], reverse: bool, related: str, use_raw: bool) -> str
```
Returns name of the column that should be used to extract ids from model.
Depending on the relation side it's either primary key column of parent model
or field name specified by related parameter.
**Arguments**:
- `parent_model` (`Type[Model]`): model from which id column should be extracted
- `reverse` (`bool`): flag if the relation is reverse
- `related` (`str`): name of the relation field
- `use_raw` (`bool`): flag if aliases or field names should be used
**Returns**:
`str`: name of the column to use for id extraction
<a name="models.mixins.prefetch_mixin.PrefetchQueryMixin.get_related_field_name"></a>
#### get\_related\_field\_name
```python
| @classmethod
| get_related_field_name(cls, target_field: "ForeignKeyField") -> str
```
Returns name of the relation field that should be used in prefetch query.
This field is later used to register relation in prefetch query,
populate relations dict, and populate nested model in prefetch query.
**Arguments**:
- `target_field` (`Type[BaseField]`): relation field that should be used in prefetch
**Returns**:
`str`: name of the field
<a name="models.mixins.prefetch_mixin.PrefetchQueryMixin.get_filtered_names_to_extract"></a>
#### get\_filtered\_names\_to\_extract
```python
| @classmethod
| get_filtered_names_to_extract(cls, prefetch_dict: Dict) -> List
```
Returns a list of related field names that should be followed to prefetch related
models from.
The list of models is translated into a dict to ensure each model is extracted only
once in one query; that's why this function accepts prefetch_dict, not a list.
Only relations from current model are returned.
**Arguments**:
- `prefetch_dict` (`Dict`): dictionary of fields to extract
**Returns**:
`List`: list of fields names to extract

@ -1,120 +0,0 @@
<a name="models.mixins.relation_mixin"></a>
# models.mixins.relation\_mixin
<a name="models.mixins.relation_mixin.RelationMixin"></a>
## RelationMixin Objects
```python
class RelationMixin()
```
Used to return relation fields/names etc. from given model
<a name="models.mixins.relation_mixin.RelationMixin.extract_db_own_fields"></a>
#### extract\_db\_own\_fields
```python
| @classmethod
| extract_db_own_fields(cls) -> Set
```
Returns only fields that are stored in the model's own database table, excluding all
related fields.
**Returns**:
`Set`: set of model fields with relation fields excluded
<a name="models.mixins.relation_mixin.RelationMixin.extract_related_fields"></a>
#### extract\_related\_fields
```python
| @classmethod
| extract_related_fields(cls) -> List["ForeignKeyField"]
```
Returns List of ormar Fields for all relations declared on a model.
List is cached in cls._related_fields for quicker access.
**Returns**:
`List`: list of related fields
<a name="models.mixins.relation_mixin.RelationMixin.extract_through_names"></a>
#### extract\_through\_names
```python
| @classmethod
| extract_through_names(cls) -> Set[str]
```
Extracts related fields through names which are shortcuts to through models.
**Returns**:
`Set`: set of related through fields names
<a name="models.mixins.relation_mixin.RelationMixin.extract_related_names"></a>
#### extract\_related\_names
```python
| @classmethod
| extract_related_names(cls) -> Set[str]
```
Returns List of field names for all relations declared on a model.
List is cached in cls._related_names for quicker access.
**Returns**:
`Set`: set of related fields names
<a name="models.mixins.relation_mixin.RelationMixin._extract_db_related_names"></a>
#### \_extract\_db\_related\_names
```python
| @classmethod
| _extract_db_related_names(cls) -> Set
```
Returns only fields that are stored in the model's own database table, excluding
related fields that are not stored as foreign keys on the given model.
**Returns**:
`Set`: set of model fields with non fk relation fields excluded
<a name="models.mixins.relation_mixin.RelationMixin._iterate_related_models"></a>
#### \_iterate\_related\_models
```python
| @classmethod
| _iterate_related_models(cls, node_list: NodeList = None, source_relation: str = None) -> List[str]
```
Iterates related models recursively to extract relation strings of
nested, not yet visited models.
**Returns**:
`List[str]`: list of relation strings to be passed to select_related
<a name="models.mixins.relation_mixin.RelationMixin._get_final_relations"></a>
#### \_get\_final\_relations
```python
| @staticmethod
| _get_final_relations(processed_relations: List, source_relation: Optional[str]) -> List[str]
```
Helper method to prefix nested relation strings with current source relation
**Arguments**:
- `processed_relations` (`List[str]`): list of already processed relation str
- `source_relation` (`str`): name of the current relation
**Returns**:
`List[str]`: list of relation strings to be passed to select_related

@ -1,238 +0,0 @@
<a name="models.mixins.save_mixin"></a>
# models.mixins.save\_mixin
<a name="models.mixins.save_mixin.SavePrepareMixin"></a>
## SavePrepareMixin Objects
```python
class SavePrepareMixin(RelationMixin, AliasMixin)
```
Used to prepare models to be saved in database
<a name="models.mixins.save_mixin.SavePrepareMixin.prepare_model_to_save"></a>
#### prepare\_model\_to\_save
```python
| @classmethod
| prepare_model_to_save(cls, new_kwargs: dict) -> dict
```
Combines all preparation methods before saving.
Removes the primary key if it's a nullable or autoincrement pk field
and it's set to None.
Substitutes related models with their primary key values as the fk column.
Populates default values for fields with a default set and no value.
Translates columns into aliases (db names).
**Arguments**:
- `new_kwargs` (`Dict[str, str]`): dictionary of model that is about to be saved
**Returns**:
`Dict[str, str]`: dictionary of model that is about to be saved
<a name="models.mixins.save_mixin.SavePrepareMixin._remove_not_ormar_fields"></a>
#### \_remove\_not\_ormar\_fields
```python
| @classmethod
| _remove_not_ormar_fields(cls, new_kwargs: dict) -> dict
```
Removes the primary key if it's a nullable or autoincrement pk field
and it's set to None.
**Arguments**:
- `new_kwargs` (`Dict[str, str]`): dictionary of model that is about to be saved
**Returns**:
`Dict[str, str]`: dictionary of model that is about to be saved
<a name="models.mixins.save_mixin.SavePrepareMixin._remove_pk_from_kwargs"></a>
#### \_remove\_pk\_from\_kwargs
```python
| @classmethod
| _remove_pk_from_kwargs(cls, new_kwargs: dict) -> dict
```
Removes the primary key if it's a nullable or autoincrement pk field
and it's set to None.
**Arguments**:
- `new_kwargs` (`Dict[str, str]`): dictionary of model that is about to be saved
**Returns**:
`Dict[str, str]`: dictionary of model that is about to be saved
<a name="models.mixins.save_mixin.SavePrepareMixin.parse_non_db_fields"></a>
#### parse\_non\_db\_fields
```python
| @classmethod
| parse_non_db_fields(cls, model_dict: Dict) -> Dict
```
Receives dictionary of model that is about to be saved and changes uuid fields
to strings in bulk_update.
**Arguments**:
- `model_dict` (`Dict`): dictionary of model that is about to be saved
**Returns**:
`Dict`: dictionary of model that is about to be saved
<a name="models.mixins.save_mixin.SavePrepareMixin.substitute_models_with_pks"></a>
#### substitute\_models\_with\_pks
```python
| @classmethod
| substitute_models_with_pks(cls, model_dict: Dict) -> Dict
```
Receives dictionary of model that is about to be saved and changes all related
models that are stored as foreign keys to their fk value.
**Arguments**:
- `model_dict` (`Dict`): dictionary of model that is about to be saved
**Returns**:
`Dict`: dictionary of model that is about to be saved
<a name="models.mixins.save_mixin.SavePrepareMixin.populate_default_values"></a>
#### populate\_default\_values
```python
| @classmethod
| populate_default_values(cls, new_kwargs: Dict) -> Dict
```
Receives dictionary of model that is about to be saved and populates the default
value on the fields that have the default value set, but no actual value was
passed by the user.
**Arguments**:
- `new_kwargs` (`Dict`): dictionary of model that is about to be saved
**Returns**:
`Dict`: dictionary of model that is about to be saved
<a name="models.mixins.save_mixin.SavePrepareMixin.validate_choices"></a>
#### validate\_choices
```python
| @classmethod
| validate_choices(cls, new_kwargs: Dict) -> Dict
```
Receives dictionary of model that is about to be saved and validates the
fields with choices set to see if the value is allowed.
**Arguments**:
- `new_kwargs` (`Dict`): dictionary of model that is about to be saved
**Returns**:
`Dict`: dictionary of model that is about to be saved
<a name="models.mixins.save_mixin.SavePrepareMixin._upsert_model"></a>
#### \_upsert\_model
```python
| @staticmethod
| async _upsert_model(instance: "Model", save_all: bool, previous_model: Optional["Model"], relation_field: Optional["ForeignKeyField"], update_count: int) -> int
```
Method updates the given instance if:
* instance is not saved or
* instance has no pk or
* save_all=True flag is set
and the instance is not __pk_only__.
If the relation leading to the instance is a ManyToMany, the through model is saved as well.
**Arguments**:
- `instance` (`Model`): current model to upsert
- `save_all` (`bool`): flag if all models should be saved or only not saved ones
- `relation_field` (`Optional[ForeignKeyField]`): field with relation
- `previous_model` (`Model`): previous model from which method came
- `update_count` (`int`): no of updated models
**Returns**:
`int`: no of updated models
<a name="models.mixins.save_mixin.SavePrepareMixin._upsert_through_model"></a>
#### \_upsert\_through\_model
```python
| @staticmethod
| async _upsert_through_model(instance: "Model", previous_model: "Model", relation_field: "ForeignKeyField") -> None
```
Upsert through model for m2m relation.
**Arguments**:
- `instance` (`Model`): current model to upsert
- `relation_field` (`Optional[ForeignKeyField]`): field with relation
- `previous_model` (`Model`): previous model from which method came
<a name="models.mixins.save_mixin.SavePrepareMixin._update_relation_list"></a>
#### \_update\_relation\_list
```python
| async _update_relation_list(fields_list: Collection["ForeignKeyField"], follow: bool, save_all: bool, relation_map: Dict, update_count: int) -> int
```
Internal method used in save_related to follow deeper from
related models and update numbers of updated related instances.
**Arguments**:
- `fields_list` (`Collection["ForeignKeyField"]`): list of ormar fields to follow and save
- `follow` (`bool`): flag to trigger deep save - by default only directly related models are saved, with follow=True also related models of related models are saved
- `save_all` (`bool`): flag if all models should be saved or only not saved ones
- `relation_map` (`Dict`): map of relations to follow
- `update_count` (`int`): internal parameter for recursive calls - number of updated instances
**Returns**:
`int`: number of updated instances
<a name="models.mixins.save_mixin.SavePrepareMixin._get_field_values"></a>
#### \_get\_field\_values
```python
| _get_field_values(name: str) -> List
```
Extracts field values and ensures the result is a list.
**Arguments**:
- `name` (`str`): name of the field
**Returns**:
`List`: list of values

@ -1,282 +0,0 @@
<a name="models.metaclass"></a>
# models.metaclass
<a name="models.metaclass.ModelMeta"></a>
## ModelMeta Objects
```python
class ModelMeta()
```
Class used for type hinting.
Users can subclass this one for convenience but it's not required.
The only requirement is that ormar.Model has to have inner class with name Meta.
<a name="models.metaclass.add_cached_properties"></a>
#### add\_cached\_properties
```python
add_cached_properties(new_model: Type["Model"]) -> None
```
Sets cached properties for both pydantic and ormar models.
Quick access fields are fields grabbed in getattribute to skip all checks.
Related fields and names are populated to None as they can change later.
When children models are constructed they can modify the parent to register themselves.
All properties here are used as a "cache" to avoid recalculating them constantly.
**Arguments**:
- `new_model` (`Model class`): newly constructed Model
<a name="models.metaclass.add_property_fields"></a>
#### add\_property\_fields
```python
add_property_fields(new_model: Type["Model"], attrs: Dict) -> None
```
Checks the class namespace for properties or functions with __property_field__.
If an attribute has __property_field__ it was decorated with @property_field.
Functions like this are exposed in dict() (and therefore also in the fastapi result).
Names of property fields are cached for quicker access / extraction.
**Arguments**:
- `new_model` (`Model class`): newly constructed model
- `attrs` (`Dict[str, str]`):
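A sketch of the decorator this helper picks up, on a hypothetical `Employee` model: methods marked with `@ormar.property_field` show up in `dict()` next to real columns.
```python
import databases
import sqlalchemy
import ormar

database = databases.Database("sqlite:///db.sqlite")
metadata = sqlalchemy.MetaData()


class Employee(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)
    first_name: str = ormar.String(max_length=100)
    last_name: str = ormar.String(max_length=100)

    @ormar.property_field
    def full_name(self) -> str:
        return f"{self.first_name} {self.last_name}"


emp = Employee(first_name="Jan", last_name="Kowalski")
print(emp.dict()["full_name"])  # 'Jan Kowalski' - exposed alongside database fields
```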
<a name="models.metaclass.register_signals"></a>
#### register\_signals
```python
register_signals(new_model: Type["Model"]) -> None
```
Registers the model's SignalEmitter and sets predefined signals.
Predefined signals are (pre/post) + (save/update/delete).
Signals are emitted in both model own methods and in selected queryset ones.
**Arguments**:
- `new_model` (`Model class`): newly constructed model
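A sketch of how the registered signals are typically consumed (hypothetical `Track` model): receivers are attached with the decorators exposed by ormar and fire around save/update/delete.
```python
import databases
import sqlalchemy
import ormar
from ormar import pre_save

database = databases.Database("sqlite:///db.sqlite")
metadata = sqlalchemy.MetaData()


class Track(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=100)
    play_count: int = ormar.Integer(default=0)


@pre_save(Track)
async def reset_play_count(sender, instance, **kwargs):
    # runs just before Track instances are inserted by save()
    instance.play_count = 0
```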
<a name="models.metaclass.verify_constraint_names"></a>
#### verify\_constraint\_names
```python
verify_constraint_names(base_class: "Model", model_fields: Dict, parent_value: List) -> None
```
Verifies that redefined fields overwritten in subclasses did not remove
any column name that is used in a constraint, as that would fail in sqlalchemy
Table creation.
**Arguments**:
- `base_class` (`Model or model parent class`): one of the parent classes
- `model_fields` (`Dict[str, BaseField]`): ormar fields defined in the current class
- `parent_value` (`List`): list of base class constraints
<a name="models.metaclass.update_attrs_from_base_meta"></a>
#### update\_attrs\_from\_base\_meta
```python
update_attrs_from_base_meta(base_class: "Model", attrs: Dict, model_fields: Dict) -> None
```
Updates Meta parameters in child from parent if needed.
**Arguments**:
- `base_class` (`Model or model parent class`): one of the parent classes
- `attrs` (`Dict`): new namespace for class being constructed
- `model_fields` (`Dict[str, BaseField]`): ormar fields defined in the current class
<a name="models.metaclass.copy_and_replace_m2m_through_model"></a>
#### copy\_and\_replace\_m2m\_through\_model
```python
copy_and_replace_m2m_through_model(field: ManyToManyField, field_name: str, table_name: str, parent_fields: Dict, attrs: Dict, meta: ModelMeta, base_class: Type["Model"]) -> None
```
Clones the class with the Through model for m2m relations, appending the child name to the name
of the cloned class.
Clones non foreign key fields from the parent model, and the same with database columns.
Modifies related_name by appending the child table name after '_'.
For the table name, the table name of the child is appended after '_'.
Removes the original sqlalchemy table from metadata if it was not already removed.
**Arguments**:
- `base_class` (`Type["Model"]`): base class model
- `field` (`ManyToManyField`): field with relations definition
- `field_name` (`str`): name of the relation field
- `table_name` (`str`): name of the table
- `parent_fields` (`Dict`): dictionary of fields to copy to new models from parent
- `attrs` (`Dict`): new namespace for class being constructed
- `meta` (`ModelMeta`): metaclass of currently created model
<a name="models.metaclass.copy_data_from_parent_model"></a>
#### copy\_data\_from\_parent\_model
```python
copy_data_from_parent_model(base_class: Type["Model"], curr_class: type, attrs: Dict, model_fields: Dict[str, Union[BaseField, ForeignKeyField, ManyToManyField]]) -> Tuple[Dict, Dict]
```
Copies the key parameters [database, metadata, property_fields and constraints]
and fields from parent models. Overwrites them if needed.
Only abstract classes can be subclassed.
Relation fields require a different related_name for each child.
**Raises**:
- `ModelDefinitionError`: if non abstract model is subclassed
**Arguments**:
- `base_class` (`Model or model parent class`): one of the parent classes
- `curr_class` (`Model or model parent class`): current constructed class
- `attrs` (`Dict`): new namespace for class being constructed
- `model_fields` (`Dict[str, BaseField]`): ormar fields defined in the current class
**Returns**:
`Tuple[Dict, Dict]`: updated attrs and model_fields
<a name="models.metaclass.extract_from_parents_definition"></a>
#### extract\_from\_parents\_definition
```python
extract_from_parents_definition(base_class: type, curr_class: type, attrs: Dict, model_fields: Dict[str, Union[BaseField, ForeignKeyField, ManyToManyField]]) -> Tuple[Dict, Dict]
```
Extracts fields from base classes if they have valid ormar fields.
If the model was already parsed -> field definitions need to be removed from the class
because pydantic complains about field re-definition, so after the first child
we need to extract from __parsed_fields__, not the class itself.
If the class is parsed for the first time, annotations and field definitions are parsed
from the class.__dict__.
If the class is an ormar.Model it is skipped.
**Arguments**:
- `base_class` (`Model or model parent class`): one of the parent classes
- `curr_class` (`Model or model parent class`): current constructed class
- `attrs` (`Dict`): new namespace for class being constructed
- `model_fields` (`Dict[str, BaseField]`): ormar fields defined in the current class
**Returns**:
`Tuple[Dict, Dict]`: updated attrs and model_fields
<a name="models.metaclass.update_attrs_and_fields"></a>
#### update\_attrs\_and\_fields
```python
update_attrs_and_fields(attrs: Dict, new_attrs: Dict, model_fields: Dict, new_model_fields: Dict, new_fields: Set) -> Dict
```
Updates __annotations__, values of model fields (so pydantic FieldInfos)
as well as model.Meta.model_fields definitions from parents.
**Arguments**:
- `attrs` (`Dict`): new namespace for class being constructed
- `new_attrs` (`Dict`): namespace extracted from the parent class
- `model_fields` (`Dict[str, BaseField]`): ormar fields defined in the current class
- `new_model_fields` (`Dict[str, BaseField]`): ormar fields defined in parent classes
- `new_fields` (`Set[str]`): set of new fields names
<a name="models.metaclass.add_field_descriptor"></a>
#### add\_field\_descriptor
```python
add_field_descriptor(name: str, field: "BaseField", new_model: Type["Model"]) -> None
```
Sets appropriate descriptor for each model field.
There are 5 main types of descriptors, for bytes, json, pure pydantic fields,
and 2 ormar ones - one for relation and one for pk shortcut
**Arguments**:
- `name` (`str`): name of the field
- `field` (`BaseField`): model field to add descriptor for
- `new_model` (`Type["Model"]`): model with fields
<a name="models.metaclass.ModelMetaclass"></a>
## ModelMetaclass Objects
```python
class ModelMetaclass(pydantic.main.ModelMetaclass)
```
<a name="models.metaclass.ModelMetaclass.__new__"></a>
#### \_\_new\_\_
```python
| __new__(mcs: "ModelMetaclass", name: str, bases: Any, attrs: dict) -> "ModelMetaclass"
```
Metaclass used by ormar Models that performs configuration
and build of ormar Models.
Sets pydantic configuration.
Extracts model_fields and converts them to pydantic FieldInfo,
updates the class namespace.
Extracts settings and fields from parent classes.
Fetches methods decorated with the @property_field decorator
to expose them later in dict().
Constructs the parent pydantic Metaclass/Model.
If the class has a Meta class declared (so actual ormar Models) it also:
* populates sqlalchemy columns, pkname and tables from model_fields
* registers reverse relationships on related models
* registers all relations in the alias manager that populates table_prefixes
* exposes the alias manager on each Model
* creates a QuerySet for each model and exposes it on the class
**Arguments**:
- `name` (`str`): name of current class
- `bases` (`Tuple`): base classes
- `attrs` (`Dict`): class namespace
<a name="models.metaclass.ModelMetaclass.__getattr__"></a>
#### \_\_getattr\_\_
```python
| __getattr__(item: str) -> Any
```
Returns FieldAccessors on access to model fields from a class;
that way they can be used in python style filters and order_by.
**Arguments**:
- `item` (`str`): name of the field
**Returns**:
`FieldAccessor`: FieldAccessor for given field
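A sketch of the resulting usage (hypothetical `Album` model): accessing a field on the class, not an instance, yields a `FieldAccessor` that can be used in python-style filters and in `order_by`.
```python
import databases
import sqlalchemy
import ormar

database = databases.Database("sqlite:///db.sqlite")
metadata = sqlalchemy.MetaData()


class Album(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=100)
    year: int = ormar.Integer()


# nothing is executed here - only the query is built from the accessors
query = Album.objects.filter(Album.year > 2000).order_by(Album.name.asc())
```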

@ -1,186 +0,0 @@
<a name="models.model_row"></a>
# models.model\_row
<a name="models.model_row.ModelRow"></a>
## ModelRow Objects
```python
class ModelRow(NewBaseModel)
```
<a name="models.model_row.ModelRow.from_row"></a>
#### from\_row
```python
| @classmethod
| from_row(cls, row: sqlalchemy.engine.ResultProxy, source_model: Type["Model"], select_related: List = None, related_models: Any = None, related_field: "ForeignKeyField" = None, excludable: ExcludableItems = None, current_relation_str: str = "", proxy_source_model: Optional[Type["Model"]] = None, used_prefixes: List[str] = None) -> Optional["Model"]
```
Model method to convert raw sql row from database into ormar.Model instance.
Traverses nested models if they were specified in select_related for query.
Called recursively; returns the model instance if it's present in the row.
Note that it processes one row at a time, so if there are duplicates of
the parent row that need to be joined/combined
(like a parent row in a sql join with 2+ child rows),
the instances populated in this method are later combined in the QuerySet.
The other method working directly on raw database results is in prefetch_query,
where rows are populated in a different way as they do not have
nested models in the result.
**Arguments**:
- `used_prefixes` (`List[str]`): list of already extracted prefixes
- `proxy_source_model` (`Optional[Type["ModelRow"]]`): source model from which querysetproxy is constructed
- `excludable` (`ExcludableItems`): structure of fields to include and exclude
- `current_relation_str` (`str`): name of the relation field
- `source_model` (`Type[Model]`): model on which relation was defined
- `row` (`sqlalchemy.engine.result.ResultProxy`): raw result row from the database
- `select_related` (`List`): list of names of related models fetched from database
- `related_models` (`Union[List, Dict]`): list or dict of related models
- `related_field` (`ForeignKeyField`): field with relation declaration
**Returns**:
`Optional[Model]`: returns model if model is populated from database
<a name="models.model_row.ModelRow._process_table_prefix"></a>
#### \_process\_table\_prefix
```python
| @classmethod
| _process_table_prefix(cls, source_model: Type["Model"], current_relation_str: str, related_field: "ForeignKeyField", used_prefixes: List[str]) -> str
```
**Arguments**:
- `source_model` (`Type[Model]`): model on which relation was defined
- `current_relation_str` (`str`): current relation string
- `related_field` (`"ForeignKeyField"`): field with relation declaration
- `used_prefixes` (`List[str]`): list of already extracted prefixes
**Returns**:
`str`: table_prefix to use
<a name="models.model_row.ModelRow._populate_nested_models_from_row"></a>
#### \_populate\_nested\_models\_from\_row
```python
| @classmethod
| _populate_nested_models_from_row(cls, item: dict, row: sqlalchemy.engine.ResultProxy, source_model: Type["Model"], related_models: Any, excludable: ExcludableItems, table_prefix: str, used_prefixes: List[str], current_relation_str: str = None, proxy_source_model: Type["Model"] = None) -> dict
```
Traverses structure of related models and populates the nested models
from the database row.
Related models can be a list if only directly related models are to be
populated, converted to dict if related models also have their own related
models to be populated.
Recursively calls the from_row method on nested instances and creates nested
instances. In the end those instances are added to the final model dictionary.
**Arguments**:
- `proxy_source_model` (`Optional[Type["ModelRow"]]`): source model from which querysetproxy is constructed
- `excludable` (`ExcludableItems`): structure of fields to include and exclude
- `source_model` (`Type[Model]`): source model from which relation started
- `current_relation_str` (`str`): joined related parts into one string
- `item` (`Dict`): dictionary of already populated nested models, otherwise empty dict
- `row` (`sqlalchemy.engine.result.ResultProxy`): raw result row from the database
- `related_models` (`Union[Dict, List]`): list or dict of related models
**Returns**:
`Dict`: dictionary with keys corresponding to model fields names
<a name="models.model_row.ModelRow._process_remainder_and_relation_string"></a>
#### \_process\_remainder\_and\_relation\_string
```python
| @staticmethod
| _process_remainder_and_relation_string(related_models: Union[Dict, List], current_relation_str: Optional[str], related: str) -> Tuple[str, Optional[Union[Dict, List]]]
```
Process remainder models and relation string
**Arguments**:
- `related_models` (`Union[Dict, List]`): list or dict of related models
- `current_relation_str` (`Optional[str]`): current relation string
- `related` (`str`): name of the relation
<a name="models.model_row.ModelRow._populate_through_instance"></a>
#### \_populate\_through\_instance
```python
| @classmethod
| _populate_through_instance(cls, row: sqlalchemy.engine.ResultProxy, item: Dict, related: str, excludable: ExcludableItems, child: "Model", proxy_source_model: Optional[Type["Model"]]) -> None
```
Populates the through model on reverse side of current query.
Normally it's child class, unless the query is from queryset.
**Arguments**:
- `row` (`sqlalchemy.engine.ResultProxy`): row from db result
- `item` (`Dict`): parent item dict
- `related` (`str`): current relation name
- `excludable` (`ExcludableItems`): structure of fields to include and exclude
- `child` (`"Model"`): child item of parent
- `proxy_source_model` (`Type["Model"]`): source model from which querysetproxy is constructed
<a name="models.model_row.ModelRow._create_through_instance"></a>
#### \_create\_through\_instance
```python
| @classmethod
| _create_through_instance(cls, row: sqlalchemy.engine.ResultProxy, through_name: str, related: str, excludable: ExcludableItems) -> "ModelRow"
```
Initializes the through model from a db row.
Excludes all relation fields and applies other exclude/include settings from excludable.
**Arguments**:
- `row` (`sqlalchemy.engine.ResultProxy`): loaded row from database
- `through_name` (`str`): name of the through field
- `related` (`str`): name of the relation
- `excludable` (`ExcludableItems`): structure of fields to include and exclude
**Returns**:
`"ModelRow"`: initialized through model without relation
<a name="models.model_row.ModelRow.extract_prefixed_table_columns"></a>
#### extract\_prefixed\_table\_columns
```python
| @classmethod
| extract_prefixed_table_columns(cls, item: dict, row: sqlalchemy.engine.result.ResultProxy, table_prefix: str, excludable: ExcludableItems) -> Dict
```
Extracts own fields from raw sql result, using a given prefix.
Prefix changes depending on the table's position in a join.
If the table is a main table, there is no prefix.
All joined tables have prefixes to allow duplicate column names,
as well as duplicated joins to the same table from multiple different tables.
Extracted fields populate the related dict later used to construct a Model.
Used in Model.from_row and PrefetchQuery._populate_rows methods.
**Arguments**:
- `excludable` (`ExcludableItems`): structure of fields to include and exclude
- `item` (`Dict`): dictionary of already populated nested models, otherwise empty dict
- `row` (`sqlalchemy.engine.result.ResultProxy`): raw result row from the database
- `table_prefix` (`str`): prefix of the table from AliasManager - each pair of tables has its own prefix (two of them depending on direction), used in joins to allow multiple joins to the same table
**Returns**:
`Dict`: dictionary with keys corresponding to model fields names

@ -1,18 +0,0 @@
<a name="models.modelproxy"></a>
# models.modelproxy
<a name="models.modelproxy.ModelTableProxy"></a>
## ModelTableProxy Objects
```python
class ModelTableProxy(
PrefetchQueryMixin,
MergeModelMixin,
SavePrepareMixin,
ExcludableMixin,
PydanticMixin)
```
Used to combine all mixins with different sets of functionality.
One of the bases of the ormar Model class.

@ -1,201 +0,0 @@
<a name="models.model"></a>
# models.model
<a name="models.model.Model"></a>
## Model Objects
```python
class Model(ModelRow)
```
<a name="models.model.Model.upsert"></a>
#### upsert
```python
| async upsert(**kwargs: Any) -> T
```
Performs either a save or an update depending on the presence of the pk.
If the pk field is filled it's an update, otherwise the save is performed.
For save, kwargs are ignored; they are only used in update if provided.
**Arguments**:
- `kwargs` (`Any`): list of fields to update
**Returns**:
`Model`: saved Model
<a name="models.model.Model.save"></a>
#### save
```python
| async save() -> T
```
Performs a save of the given Model instance.
If a row with the same primary key already exists, the db backend will throw an integrity error.
Related models are saved by pk number; reverse relations and many to many fields
are not saved - use the corresponding relation methods.
If there are fields with server_default set and those fields
are not already filled, save will also trigger a second query
to refresh the fields populated server side.
Does not check whether the model was previously saved.
If you want to perform an update or insert depending on the presence
of the pk field, use upsert.
Sends pre_save and post_save signals.
Sets model save status to True.
**Returns**:
`Model`: saved Model
<a name="models.model.Model.save_related"></a>
#### save\_related
```python
| async save_related(follow: bool = False, save_all: bool = False, relation_map: Dict = None, exclude: Union[Set, Dict] = None, update_count: int = 0, previous_model: "Model" = None, relation_field: Optional["ForeignKeyField"] = None) -> int
```
Triggers an upsert method on all related models
if the instances are not already saved.
By default saves only the directly related ones.
If follow=True is set it saves also related models of related models.
To avoid getting stuck in an infinite loop (related models also keep a relation
to the parent model) a set of visited models is kept.
That way already visited nested models are saved, but the save does not
follow them further. So Model A -> Model B -> Model A -> Model C will save the second
Model A but will never follow into Model C.
Nested relations of that kind need to be persisted manually.
**Arguments**:
- `relation_field` (`Optional[ForeignKeyField]`): field with relation leading to this model
- `previous_model` (`Model`): previous model from which method came
- `exclude` (`Union[Set, Dict]`): items to exclude during saving of relations
- `relation_map` (`Dict`): map of relations to follow
- `save_all` (`bool`): flag if all models should be saved or only not saved ones
- `follow` (`bool`): flag to trigger deep save - by default only directly related models are saved, with follow=True also related models of related models are saved
- `update_count` (`int`): internal parameter for recursive calls - number of updated instances
**Returns**:
`int`: number of updated/saved models
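A short sketch of how this could be used, reusing the hypothetical `Album`/`Track` models from the `upsert()` example above (the behavior shown is an assumption based on the description above, not a verbatim ormar example):
```python
# Reuses the hypothetical Album/Track models from the upsert() sketch above.
async def save_related_demo() -> None:
    album = Album(name="Vintage")
    await album.save()
    # constructing a child registers the relation on both sides,
    # but the track itself is not persisted yet
    track = Track(album=album, title="The Bird", position=1)
    saved = await album.save_related()  # upserts the not-yet-saved track
    print(saved)                        # number of updated/saved models
```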
<a name="models.model.Model.update"></a>
#### update
```python
| async update(_columns: List[str] = None, **kwargs: Any) -> T
```
Performs update of Model instance in the database.
Fields can be updated on the instance beforehand, or you can pass them as kwargs.
Sends pre_update and post_update signals.
Sets model save status to True.
**Arguments**:
- `_columns` (`List`): list of columns to update, if None all are updated
- `kwargs` (`Any`): list of fields to update as field=value pairs
**Raises**:
- `ModelPersistenceError`: If the pk column is not set
**Returns**:
`Model`: updated Model
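For example (a sketch reusing the hypothetical `Album` model from the `upsert()` example above):
```python
# Reuses the hypothetical Album model from the upsert() sketch above.
async def update_demo() -> None:
    album = await Album.objects.get(name="Malibu Deluxe")
    album.name = "Malibu"
    await album.update()                      # persists fields changed beforehand
    await album.update(name="Malibu (2022)")  # or pass the new values as kwargs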
<a name="models.model.Model.delete"></a>
#### delete
```python
| async delete() -> int
```
Removes the Model instance from the database.
Sends pre_delete and post_delete signals.
Sets model save status to False.
Note that it does not delete the Model itself (the python object).
So you can delete and later save it (since the pk is removed no conflict will arise),
or update it and the Model will be saved in the database again.
**Returns**:
`int`: number of deleted rows (for some backends)
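A brief sketch, reusing the hypothetical `Album` model from the `upsert()` example above:
```python
# Reuses the hypothetical Album model from the upsert() sketch above.
async def delete_demo() -> None:
    album = await Album.objects.get(name="Malibu (2022)")
    deleted_rows = await album.delete()  # row removed, python object still alive
    await album.save()                   # the same object can be saved again later
```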
<a name="models.model.Model.load"></a>
#### load
```python
| async load() -> T
```
Allows refreshing existing Model fields from the database.
Be careful, as related models can be overwritten by pk_only models during load.
Does NOT refresh related models' fields if they were loaded before.
**Raises**:
- `NoMatch`: If given pk is not found in database.
**Returns**:
`Model`: reloaded Model
<a name="models.model.Model.load_all"></a>
#### load\_all
```python
| async load_all(follow: bool = False, exclude: Union[List, str, Set, Dict] = None, order_by: Union[List, str] = None) -> T
```
Allows refreshing existing Model fields from the database.
Performs refresh of the related models fields.
By default loads only self and the directly related ones.
If follow=True is set it loads also related models of related models.
To avoid getting stuck in an infinite loop (related models also keep a relation
to the parent model) a set of visited models is kept.
That way already visited nested models are loaded, but the load does not
follow them further. So Model A -> Model B -> Model C -> Model A -> Model X
will load the second Model A but will never follow into Model X.
Nested relations of that kind need to be loaded manually.
**Arguments**:
- `order_by` (`Union[List, str]`): columns by which models should be sorted
- `exclude` (`Union[List, str, Set, Dict]`): related models to exclude
- `follow` (`bool`): flag to trigger deep load - by default only directly related models are loaded, with follow=True also related models of related models are loaded
**Raises**:
- `NoMatch`: If given pk is not found in database.
**Returns**:
`Model`: reloaded Model
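For example (a sketch reusing the hypothetical `Album`/`Track` models from the `upsert()` example above):
```python
# Reuses the hypothetical Album/Track models from the upsert() sketch above.
async def load_all_demo() -> None:
    album = await Album.objects.get(name="Malibu")
    await album.load_all()             # refresh the album and its direct relations
    await album.load_all(follow=True)  # also refresh relations of related models
```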


@ -1,600 +0,0 @@
<a name="models.newbasemodel"></a>
# models.newbasemodel
<a name="models.newbasemodel.NewBaseModel"></a>
## NewBaseModel Objects
```python
class NewBaseModel(pydantic.BaseModel, ModelTableProxy, metaclass=ModelMetaclass)
```
Main base class of ormar Model.
Inherits from pydantic BaseModel and has all mixins combined in ModelTableProxy.
Constructed with ModelMetaclass which in turn also inherits pydantic metaclass.
Abstracts away all internals and helper functions, so final Model class has only
the logic concerned with database connection and data persistence.
<a name="models.newbasemodel.NewBaseModel.__init__"></a>
#### \_\_init\_\_
```python
| __init__(*args: Any, **kwargs: Any) -> None
```
Initializer that creates a new ormar Model that is also pydantic Model at the
same time.
Passed keyword arguments can be only field names and their corresponding values
as those will be passed to pydantic validation that will complain if extra
params are passed.
If relations are defined each relation is expanded and children models are also
initialized and validated. Relation from both sides is registered so you can
access related models from both sides.
Json fields are automatically loaded/dumped if needed.
Models marked as abstract=True in internal Meta class cannot be initialized.
Also accepts a special __pk_only__ flag indicating that the Model is constructed
only with the primary key value (no other fields - it's a child model on another
Model); that causes the validation to be skipped, which is the only case when
validation can be skipped.
Also accepts a special __excluded__ parameter that contains a set of fields that
should be explicitly set to None, as otherwise pydantic would try to populate
them with their default values if a default is set.
**Raises**:
- `ModelError`: if an abstract model is initialized, the model has ForwardRefs
that have not been updated, or an unknown field is passed
**Arguments**:
- `args` (`Any`): ignored args
- `kwargs` (`Any`): keyword arguments - all fields values and some special params
<a name="models.newbasemodel.NewBaseModel.__setattr__"></a>
#### \_\_setattr\_\_
```python
| __setattr__(name: str, value: Any) -> None
```
Overwrites setattr in pydantic parent as otherwise descriptors are not called.
**Arguments**:
- `name` (`str`): name of the attribute to set
- `value` (`Any`): value of the attribute to set
**Returns**:
`None`: None
<a name="models.newbasemodel.NewBaseModel.__getattr__"></a>
#### \_\_getattr\_\_
```python
| __getattr__(item: str) -> Any
```
Used only to silence mypy errors for Through models and reverse relations.
Not used in real life as in practice calls are intercepted
by RelationDescriptors
**Arguments**:
- `item` (`str`): name of attribute
**Returns**:
`Any`: Any
<a name="models.newbasemodel.NewBaseModel._internal_set"></a>
#### \_internal\_set
```python
| _internal_set(name: str, value: Any) -> None
```
Delegates call to pydantic.
**Arguments**:
- `name` (`str`): name of param
- `value` (`Any`): value to set
<a name="models.newbasemodel.NewBaseModel._verify_model_can_be_initialized"></a>
#### \_verify\_model\_can\_be\_initialized
```python
| _verify_model_can_be_initialized() -> None
```
Raises exception if model is abstract or has ForwardRefs in relation fields.
**Returns**:
`None`: None
<a name="models.newbasemodel.NewBaseModel._process_kwargs"></a>
#### \_process\_kwargs
```python
| _process_kwargs(kwargs: Dict) -> Tuple[Dict, Dict]
```
Initializes nested models.
Removes property_fields.
Checks if each field is in the model fields or pydantic fields.
Nullifies fields that should be excluded.
Extracts through models from kwargs into temporary dict.
**Arguments**:
- `kwargs` (`Dict`): passed to init keyword arguments
**Returns**:
`Tuple[Dict, Dict]`: modified kwargs
<a name="models.newbasemodel.NewBaseModel._initialize_internal_attributes"></a>
#### \_initialize\_internal\_attributes
```python
| _initialize_internal_attributes() -> None
```
Initializes internal attributes during __init__()
**Returns**:
`None`:
<a name="models.newbasemodel.NewBaseModel.__eq__"></a>
#### \_\_eq\_\_
```python
| __eq__(other: object) -> bool
```
Compares another model to this model when == is called.
**Arguments**:
- `other` (`object`): other model to compare
**Returns**:
`bool`: result of comparison
<a name="models.newbasemodel.NewBaseModel.__same__"></a>
#### \_\_same\_\_
```python
| __same__(other: "NewBaseModel") -> bool
```
Used by __eq__, compares other model to this model.
Compares:
* _orm_ids,
* primary key values if they are set
* dictionary of own fields (excluding relations)
**Arguments**:
- `other` (`NewBaseModel`): model to compare to
**Returns**:
`bool`: result of comparison
<a name="models.newbasemodel.NewBaseModel.get_name"></a>
#### get\_name
```python
| @classmethod
| get_name(cls, lower: bool = True) -> str
```
Returns name of the Model class, by default lowercase.
**Arguments**:
- `lower` (`bool`): flag if name should be set to lowercase
**Returns**:
`str`: name of the model
<a name="models.newbasemodel.NewBaseModel.pk_column"></a>
#### pk\_column
```python
| @property
| pk_column() -> sqlalchemy.Column
```
Retrieves the primary key sqlalchemy column from the model's Meta.table.
Each model has to have a primary key.
Only one primary key column is allowed.
**Returns**:
`sqlalchemy.Column`: primary key sqlalchemy column
<a name="models.newbasemodel.NewBaseModel.saved"></a>
#### saved
```python
| @property
| saved() -> bool
```
Saved status of the model. Changed by setattr and loading from db
<a name="models.newbasemodel.NewBaseModel.signals"></a>
#### signals
```python
| @property
| signals() -> "SignalEmitter"
```
Exposes signals from model Meta
<a name="models.newbasemodel.NewBaseModel.pk_type"></a>
#### pk\_type
```python
| @classmethod
| pk_type(cls) -> Any
```
Shortcut to models primary key field type
<a name="models.newbasemodel.NewBaseModel.db_backend_name"></a>
#### db\_backend\_name
```python
| @classmethod
| db_backend_name(cls) -> str
```
Shortcut to the database dialect,
because some dialects require different treatment
<a name="models.newbasemodel.NewBaseModel.remove"></a>
#### remove
```python
| remove(parent: "Model", name: str) -> None
```
Removes child from relation with given name in RelationshipManager
<a name="models.newbasemodel.NewBaseModel.set_save_status"></a>
#### set\_save\_status
```python
| set_save_status(status: bool) -> None
```
Sets value of the save status
<a name="models.newbasemodel.NewBaseModel.get_properties"></a>
#### get\_properties
```python
| @classmethod
| get_properties(cls, include: Union[Set, Dict, None], exclude: Union[Set, Dict, None]) -> Set[str]
```
Returns a set of names of functions/fields decorated with
@property_field decorator.
They are added to the dictionary when dict() is called and therefore are also
present in fastapi responses.
**Arguments**:
- `include` (`Union[Set, Dict, None]`): fields to include
- `exclude` (`Union[Set, Dict, None]`): fields to exclude
**Returns**:
`Set[str]`: set of property fields names
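As an illustration, a hypothetical model (all names invented for the example) using the `@ormar.property_field` decorator; such fields end up in `dict()` output and hence in fastapi responses:
```python
import databases
import ormar
import sqlalchemy

database = databases.Database("sqlite:///example.db")
metadata = sqlalchemy.MetaData()


class Employee(ormar.Model):
    class Meta:
        metadata = metadata
        database = database
        tablename = "employees"

    id: int = ormar.Integer(primary_key=True)
    first_name: str = ormar.String(max_length=100)
    last_name: str = ormar.String(max_length=100)

    @ormar.property_field
    def full_name(self) -> str:
        # only self is allowed as an argument on property fields
        return f"{self.first_name} {self.last_name}"
```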
<a name="models.newbasemodel.NewBaseModel.update_forward_refs"></a>
#### update\_forward\_refs
```python
| @classmethod
| update_forward_refs(cls, **localns: Any) -> None
```
Processes fields that are ForwardRef and need to be evaluated into actual
models.
Expands relationships, registers relations in the alias manager and substitutes
sqlalchemy columns with new ones with proper column type (null before).
Populates Meta table of the Model which is left empty before.
Sets self_reference flag on models that links to themselves.
Calls the pydantic method to evaluate pydantic fields.
**Arguments**:
- `localns` (`Any`): local namespace
**Returns**:
`None`: None
<a name="models.newbasemodel.NewBaseModel._get_not_excluded_fields"></a>
#### \_get\_not\_excluded\_fields
```python
| @staticmethod
| _get_not_excluded_fields(fields: Union[List, Set], include: Optional[Dict], exclude: Optional[Dict]) -> List
```
Returns related field names with the include and exclude sets applied to them.
**Arguments**:
- `include` (`Union[Set, Dict, None]`): fields to include
- `exclude` (`Union[Set, Dict, None]`): fields to exclude
**Returns**:
`List`: fields with relations that are not excluded
<a name="models.newbasemodel.NewBaseModel._extract_nested_models_from_list"></a>
#### \_extract\_nested\_models\_from\_list
```python
| @staticmethod
| _extract_nested_models_from_list(relation_map: Dict, models: MutableSequence, include: Union[Set, Dict, None], exclude: Union[Set, Dict, None], exclude_primary_keys: bool, exclude_through_models: bool) -> List
```
Converts list of models into list of dictionaries.
**Arguments**:
- `models` (`List`): List of models
- `include` (`Union[Set, Dict, None]`): fields to include
- `exclude` (`Union[Set, Dict, None]`): fields to exclude
**Returns**:
`List[Dict]`: list of models converted to dictionaries
<a name="models.newbasemodel.NewBaseModel._skip_ellipsis"></a>
#### \_skip\_ellipsis
```python
| @classmethod
| _skip_ellipsis(cls, items: Union[Set, Dict, None], key: str, default_return: Any = None) -> Union[Set, Dict, None]
```
Helper to traverse the include/exclude dictionaries.
In dict() Ellipsis should be skipped as it indicates all fields required
and not the actual set/dict with fields names.
**Arguments**:
- `items` (`Union[Set, Dict, None]`): current include/exclude value
- `key` (`str`): key for nested relations to check
**Returns**:
`Union[Set, Dict, None]`: nested value of the items
<a name="models.newbasemodel.NewBaseModel._convert_all"></a>
#### \_convert\_all
```python
| @staticmethod
| _convert_all(items: Union[Set, Dict, None]) -> Union[Set, Dict, None]
```
Helper to convert __all__ pydantic special index to ormar which does not
support index based exclusions.
**Arguments**:
- `items` (`Union[Set, Dict, None]`): current include/exclude value
<a name="models.newbasemodel.NewBaseModel._extract_nested_models"></a>
#### \_extract\_nested\_models
```python
| _extract_nested_models(relation_map: Dict, dict_instance: Dict, include: Optional[Dict], exclude: Optional[Dict], exclude_primary_keys: bool, exclude_through_models: bool) -> Dict
```
Traverses nested models and converts them into dictionaries.
Calls itself recursively if needed.
**Arguments**:
- `relation_map` (`Dict`): map of relations to follow
- `dict_instance` (`Dict`): current instance dict
- `include` (`Optional[Dict]`): fields to include
- `exclude` (`Optional[Dict]`): fields to exclude
**Returns**:
`Dict`: current model dict with child models converted to dictionaries
<a name="models.newbasemodel.NewBaseModel.dict"></a>
#### dict
```python
| dict(*, include: Union[Set, Dict] = None, exclude: Union[Set, Dict] = None, by_alias: bool = False, skip_defaults: bool = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, exclude_primary_keys: bool = False, exclude_through_models: bool = False, relation_map: Dict = None) -> "DictStrAny"
```
Generate a dictionary representation of the model,
optionally specifying which fields to include or exclude.
Nested models are also parsed to dictionaries.
Additionally, fields decorated with @property_field are added.
**Arguments**:
- `exclude_through_models` (`bool`): flag to exclude through models from dict
- `exclude_primary_keys` (`bool`): flag to exclude primary keys from dict
- `include` (`Union[Set, Dict, None]`): fields to include
- `exclude` (`Union[Set, Dict, None]`): fields to exclude
- `by_alias` (`bool`): flag to get values by alias - passed to pydantic
- `skip_defaults` (`bool`): flag to not set values - passed to pydantic
- `exclude_unset` (`bool`): flag to exclude not set values - passed to pydantic
- `exclude_defaults` (`bool`): flag to exclude default values - passed to pydantic
- `exclude_none` (`bool`): flag to exclude None values - passed to pydantic
- `relation_map` (`Dict`): map of the relations to follow to avoid circular deps
**Returns**:
`Dict`: dictionary of the model's fields and values
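A few usage examples, reusing the hypothetical `Album`/`Track` models from the `upsert()` sketch above (field and relation names are part of that invented example):
```python
# Reuses the hypothetical Album/Track models from the upsert() sketch above.
async def dict_demo() -> None:
    album = await Album.objects.select_related("tracks").get(name="Malibu")
    full = album.dict()                              # nested tracks become dicts too
    no_tracks = album.dict(exclude={"tracks"})       # drop the reverse relation
    only_some = album.dict(include={"id", "name"})   # keep only chosen own fields
    slim = album.dict(exclude_primary_keys=True, exclude_through_models=True)
```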
<a name="models.newbasemodel.NewBaseModel.json"></a>
#### json
```python
| json(*, include: Union[Set, Dict] = None, exclude: Union[Set, Dict] = None, by_alias: bool = False, skip_defaults: bool = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, exclude_primary_keys: bool = False, exclude_through_models: bool = False, **dumps_kwargs: Any) -> str
```
Generate a JSON representation of the model, `include` and `exclude`
arguments as per `dict()`.
`encoder` is an optional function to supply as `default` to json.dumps(),
other arguments as per `json.dumps()`.
<a name="models.newbasemodel.NewBaseModel.update_from_dict"></a>
#### update\_from\_dict
```python
| update_from_dict(value_dict: Dict) -> "NewBaseModel"
```
Updates self with values of fields passed in the dictionary.
**Arguments**:
- `value_dict` (`Dict`): dictionary of fields names and values
**Returns**:
`NewBaseModel`: self
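A quick sketch, reusing the hypothetical `Album` model from the `upsert()` example above:
```python
# Reuses the hypothetical Album model from the upsert() sketch above.
async def update_from_dict_demo() -> None:
    album = await Album.objects.get(name="Malibu")
    # returns self, so it can be chained with update()/save()
    await album.update_from_dict({"name": "Malibu Remastered"}).update()
```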
<a name="models.newbasemodel.NewBaseModel._convert_to_bytes"></a>
#### \_convert\_to\_bytes
```python
| _convert_to_bytes(column_name: str, value: Any) -> Union[str, Dict]
```
Converts value to bytes from string
**Arguments**:
- `column_name` (`str`): name of the field
- `value` (`Any`): value of the field
**Returns**:
`Any`: converted value if needed, else original value
<a name="models.newbasemodel.NewBaseModel._convert_bytes_to_str"></a>
#### \_convert\_bytes\_to\_str
```python
| _convert_bytes_to_str(column_name: str, value: Any) -> Union[str, Dict]
```
Converts value to str from bytes for represent_as_base64_str columns.
**Arguments**:
- `column_name` (`str`): name of the field
- `value` (`Any`): value of the field
**Returns**:
`Any`: converted value if needed, else original value
<a name="models.newbasemodel.NewBaseModel._convert_json"></a>
#### \_convert\_json
```python
| _convert_json(column_name: str, value: Any) -> Union[str, Dict]
```
Converts value to/from json if needed (for Json columns).
**Arguments**:
- `column_name` (`str`): name of the field
- `value` (`Any`): value of the field
**Returns**:
`Any`: converted value if needed, else original value
<a name="models.newbasemodel.NewBaseModel._extract_own_model_fields"></a>
#### \_extract\_own\_model\_fields
```python
| _extract_own_model_fields() -> Dict
```
Returns a dictionary with field names and values for fields that are not
relation fields (ForeignKey, ManyToMany etc.)
**Returns**:
`Dict`: dictionary of fields names and values.
<a name="models.newbasemodel.NewBaseModel._extract_model_db_fields"></a>
#### \_extract\_model\_db\_fields
```python
| _extract_model_db_fields() -> Dict
```
Returns a dictionary with field names and values for fields that are stored in
current model's table.
That includes own non-relational fields and foreign key fields.
**Returns**:
`Dict`: dictionary of fields names and values.
<a name="models.newbasemodel.NewBaseModel.get_relation_model_id"></a>
#### get\_relation\_model\_id
```python
| get_relation_model_id(target_field: "BaseField") -> Optional[int]
```
Returns an id of the relation side model to use in prefetch query.
**Arguments**:
- `target_field` (`"BaseField"`): field with relation definition
**Returns**:
`Optional[int]`: value of pk if set


@ -1,78 +0,0 @@
<a name="models.traversible"></a>
# models.traversible
<a name="models.traversible.NodeList"></a>
## NodeList Objects
```python
class NodeList()
```
Helper class for iterating over nested models
<a name="models.traversible.NodeList.add"></a>
#### add
```python
| add(node_class: Type["RelationMixin"], relation_name: str = None, parent_node: "Node" = None) -> "Node"
```
Adds new Node or returns the existing one
**Arguments**:
- `node_class` (`ormar.models.metaclass.ModelMetaclass`): Model in current node
- `relation_name` (`str`): name of the current relation
- `parent_node` (`Optional[Node]`): parent node
**Returns**:
`Node`: returns new or already existing node
<a name="models.traversible.NodeList.find"></a>
#### find
```python
| find(node_class: Type["RelationMixin"], relation_name: Optional[str] = None, parent_node: "Node" = None) -> Optional["Node"]
```
Searches for existing node with given parameters
**Arguments**:
- `node_class` (`ormar.models.metaclass.ModelMetaclass`): Model in current node
- `relation_name` (`str`): name of the current relation
- `parent_node` (`Optional[Node]`): parent node
**Returns**:
`Optional[Node]`: returns already existing node or None
<a name="models.traversible.Node"></a>
## Node Objects
```python
class Node()
```
<a name="models.traversible.Node.visited"></a>
#### visited
```python
| visited(relation_name: str) -> bool
```
Checks if the given relation was already visited.
A relation counts as visited if its name is in the current node's children,
or if one of the parent nodes had the same Model class.
**Arguments**:
- `relation_name` (`str`): name of relation
**Returns**:
`bool`: result of the check


@ -1,229 +0,0 @@
<a name="queryset.clause"></a>
# queryset.clause
<a name="queryset.clause.FilterGroup"></a>
## FilterGroup Objects
```python
class FilterGroup()
```
Filter groups are used in complex query conditions to group `and` and `or`
clauses in the where condition
<a name="queryset.clause.FilterGroup.resolve"></a>
#### resolve
```python
| resolve(model_cls: Type["Model"], select_related: List = None, filter_clauses: List = None) -> Tuple[List[FilterAction], List[str]]
```
Resolves the FilterGroup actions to use the proper target model, replaces
complex relation prefixes if needed, and also resolves nested groups.
**Arguments**:
- `model_cls` (`Type["Model"]`): model from which the query is run
- `select_related` (`List[str]`): list of models to join
- `filter_clauses` (`List[FilterAction]`): list of filter conditions
**Returns**:
`Tuple[List[FilterAction], List[str]]`: list of filter conditions and select_related list
<a name="queryset.clause.FilterGroup._iter"></a>
#### \_iter
```python
| _iter() -> Generator
```
Iterates all actions in a tree
**Returns**:
`Generator`: generator yielding from own actions and nested groups
<a name="queryset.clause.FilterGroup._get_text_clauses"></a>
#### \_get\_text\_clauses
```python
| _get_text_clauses() -> List[sqlalchemy.sql.expression.TextClause]
```
Helper to return list of text queries from actions and nested groups
**Returns**:
`List[sqlalchemy.sql.elements.TextClause]`: list of text queries from actions and nested groups
<a name="queryset.clause.FilterGroup.get_text_clause"></a>
#### get\_text\_clause
```python
| get_text_clause() -> sqlalchemy.sql.expression.TextClause
```
Returns all own actions and nested groups conditions compiled and joined
inside parentheses.
Escapes characters if required.
Substitutes the value with its pk value if the value is an ormar Model.
Compiles the clause.
**Returns**:
`sqlalchemy.sql.elements.TextClause`: compiled and escaped clause
<a name="queryset.clause.or_"></a>
#### or\_
```python
or_(*args: FilterGroup, **kwargs: Any) -> FilterGroup
```
Constructs an `or` filter from nested groups and keyword arguments
**Arguments**:
- `args` (`Tuple[FilterGroup]`): nested filter groups
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`ormar.queryset.clause.FilterGroup`: FilterGroup ready to be resolved
<a name="queryset.clause.and_"></a>
#### and\_
```python
and_(*args: FilterGroup, **kwargs: Any) -> FilterGroup
```
Constructs an `and` filter from nested groups and keyword arguments
**Arguments**:
- `args` (`Tuple[FilterGroup]`): nested filter groups
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`ormar.queryset.clause.FilterGroup`: FilterGroup ready to be resolved
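A sketch of combining the two, reusing the hypothetical `Album`/`Track` models from the `upsert()` example above:
```python
# Reuses the hypothetical Album/Track models from the upsert() sketch above.
async def complex_filter_demo() -> None:
    tracks = await Track.objects.select_related("album").filter(
        ormar.and_(
            ormar.or_(album__name="Malibu", album__name__icontains="vintage"),
            position__gte=1,
        )
    ).all()
```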
<a name="queryset.clause.QueryClause"></a>
## QueryClause Objects
```python
class QueryClause()
```
Constructs FilterActions from strings passed as arguments
<a name="queryset.clause.QueryClause.prepare_filter"></a>
#### prepare\_filter
```python
| prepare_filter(_own_only: bool = False, **kwargs: Any) -> Tuple[List[FilterAction], List[str]]
```
Main external access point that processes the clauses into sqlalchemy text
clauses and updates the select_related list with implicit related tables
mentioned in filter strings but not included in select_related.
**Arguments**:
- `_own_only`:
- `kwargs` (`Any`): key, value pair with column names and values
**Returns**:
`Tuple[List[sqlalchemy.sql.elements.TextClause], List[str]]`: Tuple with list of where clauses and updated select_related list
<a name="queryset.clause.QueryClause._populate_filter_clauses"></a>
#### \_populate\_filter\_clauses
```python
| _populate_filter_clauses(_own_only: bool, **kwargs: Any) -> Tuple[List[FilterAction], List[str]]
```
Iterates all clauses and extracts used operator and field from related
models if needed. Based on the chain of related names the target table
is determined and the final clause is escaped if needed and compiled.
**Arguments**:
- `kwargs` (`Any`): key, value pair with column names and values
**Returns**:
`Tuple[List[sqlalchemy.sql.elements.TextClause], List[str]]`: Tuple with list of where clauses and updated select_related list
<a name="queryset.clause.QueryClause._register_complex_duplicates"></a>
#### \_register\_complex\_duplicates
```python
| _register_complex_duplicates(select_related: List[str]) -> None
```
Checks if duplicate aliases are present, which can happen in a self relation
or when two joins end with the same pair of models.
If there are duplicates, all duplicated joins are registered with the source
model and the whole relation key (not just the last relation name).
**Arguments**:
- `select_related` (`List[str]`): list of relation strings
**Returns**:
`None`: None
<a name="queryset.clause.QueryClause._parse_related_prefixes"></a>
#### \_parse\_related\_prefixes
```python
| _parse_related_prefixes(select_related: List[str]) -> List[Prefix]
```
Walks all relation strings and parses the target models and prefixes.
**Arguments**:
- `select_related` (`List[str]`): list of relation strings
**Returns**:
`List[Prefix]`: list of parsed prefixes
<a name="queryset.clause.QueryClause._switch_filter_action_prefixes"></a>
#### \_switch\_filter\_action\_prefixes
```python
| _switch_filter_action_prefixes(filter_clauses: List[FilterAction]) -> List[FilterAction]
```
Substitutes aliases for filter action if the complex key (whole relation str) is
present in alias_manager.
**Arguments**:
- `filter_clauses` (`List[FilterAction]`): raw list of actions
**Returns**:
`List[FilterAction]`: list of actions with aliases changed if needed
<a name="queryset.clause.QueryClause._verify_prefix_and_switch"></a>
#### \_verify\_prefix\_and\_switch
```python
| _verify_prefix_and_switch(action: "FilterAction") -> None
```
Helper to switch prefix to complex relation one if required
**Arguments**:
- `action` (`ormar.queryset.actions.filter_action.FilterAction`): action to switch prefix in


@ -1,359 +0,0 @@
<a name="queryset.field_accessor"></a>
# queryset.field\_accessor
<a name="queryset.field_accessor.FieldAccessor"></a>
## FieldAccessor Objects
```python
class FieldAccessor()
```
Helper to access ormar fields directly from the Model class, also for nested
models' attributes.
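In practice this is what enables the python-style filters, for example (a sketch reusing the hypothetical `Album`/`Track` models from the `upsert()` example above):
```python
# Reuses the hypothetical Album/Track models from the upsert() sketch above.
async def python_style_filters_demo() -> None:
    # class attributes resolve to FieldAccessors, also across relations
    malibu_tracks = await Track.objects.filter(Track.album.name == "Malibu").all()
    like_mal = await Album.objects.filter(Album.name % "Mal").all()  # LIKE '%Mal%'
```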
<a name="queryset.field_accessor.FieldAccessor.__bool__"></a>
#### \_\_bool\_\_
```python
| __bool__() -> bool
```
Hack to avoid pydantic name check from parent model, returns false
**Returns**:
`bool`: False
<a name="queryset.field_accessor.FieldAccessor.__getattr__"></a>
#### \_\_getattr\_\_
```python
| __getattr__(item: str) -> Any
```
The accessor returns a new accessor for each field and nested model.
Thanks to that, operator overloads can be used in filters.
**Arguments**:
- `item` (`str`): attribute name
**Returns**:
`ormar.queryset.field_accessor.FieldAccessor`: FieldAccessor for field or nested model
<a name="queryset.field_accessor.FieldAccessor.__eq__"></a>
#### \_\_eq\_\_
```python
| __eq__(other: Any) -> FilterGroup
```
overloaded to work as sql `column = <VALUE>`
**Arguments**:
- `other` (`str`): value to check against the operator
**Returns**:
`ormar.queryset.clause.FilterGroup`: FilterGroup for operator
<a name="queryset.field_accessor.FieldAccessor.__ge__"></a>
#### \_\_ge\_\_
```python
| __ge__(other: Any) -> FilterGroup
```
overloaded to work as sql `column >= <VALUE>`
**Arguments**:
- `other` (`str`): value to check against the operator
**Returns**:
`ormar.queryset.clause.FilterGroup`: FilterGroup for operator
<a name="queryset.field_accessor.FieldAccessor.__gt__"></a>
#### \_\_gt\_\_
```python
| __gt__(other: Any) -> FilterGroup
```
overloaded to work as sql `column > <VALUE>`
**Arguments**:
- `other` (`str`): value to check against the operator
**Returns**:
`ormar.queryset.clause.FilterGroup`: FilterGroup for operator
<a name="queryset.field_accessor.FieldAccessor.__le__"></a>
#### \_\_le\_\_
```python
| __le__(other: Any) -> FilterGroup
```
overloaded to work as sql `column <= <VALUE>`
**Arguments**:
- `other` (`str`): value to check against the operator
**Returns**:
`ormar.queryset.clause.FilterGroup`: FilterGroup for operator
<a name="queryset.field_accessor.FieldAccessor.__lt__"></a>
#### \_\_lt\_\_
```python
| __lt__(other: Any) -> FilterGroup
```
overloaded to work as sql `column < <VALUE>`
**Arguments**:
- `other` (`str`): value to check against the operator
**Returns**:
`ormar.queryset.clause.FilterGroup`: FilterGroup for operator
<a name="queryset.field_accessor.FieldAccessor.__mod__"></a>
#### \_\_mod\_\_
```python
| __mod__(other: Any) -> FilterGroup
```
overloaded to work as sql `column LIKE '%<VALUE>%'`
**Arguments**:
- `other` (`str`): value to check against the operator
**Returns**:
`ormar.queryset.clause.FilterGroup`: FilterGroup for operator
<a name="queryset.field_accessor.FieldAccessor.__lshift__"></a>
#### \_\_lshift\_\_
```python
| __lshift__(other: Any) -> FilterGroup
```
overloaded to work as sql `column IN (<VALUE1>, <VALUE2>,...)`
**Arguments**:
- `other` (`str`): value to check against the operator
**Returns**:
`ormar.queryset.clause.FilterGroup`: FilterGroup for operator
<a name="queryset.field_accessor.FieldAccessor.__rshift__"></a>
#### \_\_rshift\_\_
```python
| __rshift__(other: Any) -> FilterGroup
```
overloaded to work as sql `column IS NULL`
**Arguments**:
- `other` (`str`): value to check against the operator
**Returns**:
`ormar.queryset.clause.FilterGroup`: FilterGroup for operator
<a name="queryset.field_accessor.FieldAccessor.in_"></a>
#### in\_
```python
| in_(other: Any) -> FilterGroup
```
works as sql `column IN (<VALUE1>, <VALUE2>,...)`
**Arguments**:
- `other` (`str`): value to check against the operator
**Returns**:
`ormar.queryset.clause.FilterGroup`: FilterGroup for operator
<a name="queryset.field_accessor.FieldAccessor.iexact"></a>
#### iexact
```python
| iexact(other: Any) -> FilterGroup
```
works as sql `column = <VALUE>` case-insensitive
**Arguments**:
- `other` (`str`): value to check against the operator
**Returns**:
`ormar.queryset.clause.FilterGroup`: FilterGroup for operator
<a name="queryset.field_accessor.FieldAccessor.contains"></a>
#### contains
```python
| contains(other: Any) -> FilterGroup
```
works as sql `column LIKE '%<VALUE>%'`
**Arguments**:
- `other` (`str`): value to check against the operator
**Returns**:
`ormar.queryset.clause.FilterGroup`: FilterGroup for operator
<a name="queryset.field_accessor.FieldAccessor.icontains"></a>
#### icontains
```python
| icontains(other: Any) -> FilterGroup
```
works as sql `column LIKE '%<VALUE>%'` case-insensitive
**Arguments**:
- `other` (`str`): value to check against the operator
**Returns**:
`ormar.queryset.clause.FilterGroup`: FilterGroup for operator
<a name="queryset.field_accessor.FieldAccessor.startswith"></a>
#### startswith
```python
| startswith(other: Any) -> FilterGroup
```
works as sql `column LIKE '<VALUE>%'`
**Arguments**:
- `other` (`str`): value to check against the operator
**Returns**:
`ormar.queryset.clause.FilterGroup`: FilterGroup for operator
<a name="queryset.field_accessor.FieldAccessor.istartswith"></a>
#### istartswith
```python
| istartswith(other: Any) -> FilterGroup
```
works as sql `column LIKE '<VALUE>%'` case-insensitive
**Arguments**:
- `other` (`str`): value to check against the operator
**Returns**:
`ormar.queryset.clause.FilterGroup`: FilterGroup for operator
<a name="queryset.field_accessor.FieldAccessor.endswith"></a>
#### endswith
```python
| endswith(other: Any) -> FilterGroup
```
works as sql `column LIKE '%<VALUE>'`
**Arguments**:
- `other` (`str`): value to check against the operator
**Returns**:
`ormar.queryset.clause.FilterGroup`: FilterGroup for operator
<a name="queryset.field_accessor.FieldAccessor.iendswith"></a>
#### iendswith
```python
| iendswith(other: Any) -> FilterGroup
```
works as sql `column LIKE '%<VALUE>'` case-insensitive
**Arguments**:
- `other` (`str`): value to check against the operator
**Returns**:
`ormar.queryset.clause.FilterGroup`: FilterGroup for operator
<a name="queryset.field_accessor.FieldAccessor.isnull"></a>
#### isnull
```python
| isnull(other: Any) -> FilterGroup
```
works as sql `column IS NULL` or `IS NOT NULL`
**Arguments**:
- `other` (`str`): value to check against the operator
**Returns**:
`ormar.queryset.clause.FilterGroup`: FilterGroup for operator
<a name="queryset.field_accessor.FieldAccessor.asc"></a>
#### asc
```python
| asc() -> OrderAction
```
works as sql `column asc`
**Returns**:
`ormar.queryset.actions.OrderAction`: OrderAction for operator
<a name="queryset.field_accessor.FieldAccessor.desc"></a>
#### desc
```python
| desc() -> OrderAction
```
works as sql `column desc`
**Returns**:
`ormar.queryset.actions.OrderAction`: OrderAction for operator


@ -1,29 +0,0 @@
<a name="queryset.filter_query"></a>
# queryset.filter\_query
<a name="queryset.filter_query.FilterQuery"></a>
## FilterQuery Objects
```python
class FilterQuery()
```
Modifies the select query with given list of where/filter clauses.
<a name="queryset.filter_query.FilterQuery.apply"></a>
#### apply
```python
| apply(expr: sqlalchemy.sql.select) -> sqlalchemy.sql.select
```
Applies all filter clauses if set.
**Arguments**:
- `expr` (`sqlalchemy.sql.selectable.Select`): query to modify
**Returns**:
`sqlalchemy.sql.selectable.Select`: modified query


@ -1,229 +0,0 @@
<a name="queryset.join"></a>
# queryset.join
<a name="queryset.join.SqlJoin"></a>
## SqlJoin Objects
```python
class SqlJoin()
```
<a name="queryset.join.SqlJoin.alias_manager"></a>
#### alias\_manager
```python
| @property
| alias_manager() -> AliasManager
```
Shortcut for ormar's model AliasManager stored on Meta.
**Returns**:
`AliasManager`: alias manager from model's Meta
<a name="queryset.join.SqlJoin.to_table"></a>
#### to\_table
```python
| @property
| to_table() -> sqlalchemy.Table
```
Shortcut to the table of the next model
**Returns**:
`sqlalchemy.Table`: target table
<a name="queryset.join.SqlJoin._on_clause"></a>
#### \_on\_clause
```python
| _on_clause(previous_alias: str, from_clause: str, to_clause: str) -> text
```
Receives aliases and names of both ends of the join and combines them
into one text clause used in joins.
**Arguments**:
- `previous_alias` (`str`): alias of previous table
- `from_clause` (`str`): from table name
- `to_clause` (`str`): to table name
**Returns**:
`sqlalchemy.text`: clause combining all strings
<a name="queryset.join.SqlJoin.build_join"></a>
#### build\_join
```python
| build_join() -> Tuple[List, sqlalchemy.sql.select, List, OrderedDict]
```
Main external access point for building a join.
Splits the join definition, updates fields and exclude_fields if needed,
handles switching to through models for m2m relations, returns updated lists of
used_aliases and sort_orders.
**Returns**:
`Tuple[List[str], Join, List[TextClause], collections.OrderedDict]`: list of used aliases, select from, list of aliased columns, sort orders
<a name="queryset.join.SqlJoin._forward_join"></a>
#### \_forward\_join
```python
| _forward_join() -> None
```
Processes the actual join.
Registers a complex relation join when a duplicated alias is encountered.
<a name="queryset.join.SqlJoin._process_following_joins"></a>
#### \_process\_following\_joins
```python
| _process_following_joins() -> None
```
Iterates through nested models to create subsequent joins.
<a name="queryset.join.SqlJoin._process_deeper_join"></a>
#### \_process\_deeper\_join
```python
| _process_deeper_join(related_name: str, remainder: Any) -> None
```
Creates nested recurrent instance of SqlJoin for each nested join table,
updating needed return params here as a side effect.
Updated are:
* self.used_aliases,
* self.select_from,
* self.columns,
* self.sorted_orders,
**Arguments**:
- `related_name` (`str`): name of the relation to follow
- `remainder` (`Any`): deeper tables if there are more nested joins
<a name="queryset.join.SqlJoin._process_m2m_through_table"></a>
#### \_process\_m2m\_through\_table
```python
| _process_m2m_through_table() -> None
```
Processes the Through table of the ManyToMany relation so that the source table is
linked to the through table (one additional join).
Replaces needed parameters like:
* self.next_model,
* self.next_alias,
* self.relation_name,
* self.own_alias,
* self.target_field
To point to through model
<a name="queryset.join.SqlJoin._process_m2m_related_name_change"></a>
#### \_process\_m2m\_related\_name\_change
```python
| _process_m2m_related_name_change(reverse: bool = False) -> str
```
Extracts relation name to link join through the Through model declared on
relation field.
Changes the same names in order_by queries if they are present.
**Arguments**:
- `reverse` (`bool`): flag if it's on_clause lookup - use reverse fields
**Returns**:
`str`: new relation name switched to through model field
<a name="queryset.join.SqlJoin._process_join"></a>
#### \_process\_join
```python
| _process_join() -> None
```
Resolves to and from column names and table names.
Produces on_clause.
Performs actual join updating select_from parameter.
Adds aliases of required columns to the list of columns to include in the query.
Updates the used aliases list directly.
Processes order_by clauses for non-m2m relations.
<a name="queryset.join.SqlJoin._verify_allowed_order_field"></a>
#### \_verify\_allowed\_order\_field
```python
| _verify_allowed_order_field(order_by: str) -> None
```
Verifies if proper field string is used.
**Arguments**:
- `order_by` (`str`): string with order by definition
<a name="queryset.join.SqlJoin._get_alias_and_model"></a>
#### \_get\_alias\_and\_model
```python
| _get_alias_and_model(order_by: str) -> Tuple[str, Type["Model"]]
```
Returns proper model and alias to be applied in the clause.
**Arguments**:
- `order_by` (`str`): string with order by definition
**Returns**:
`Tuple[str, Type["Model"]]`: alias and model to be used in clause
<a name="queryset.join.SqlJoin._get_order_bys"></a>
#### \_get\_order\_bys
```python
| _get_order_bys() -> None
```
Triggers construction of order bys if they are given.
Otherwise by default each table is sorted by a primary key column asc.
<a name="queryset.join.SqlJoin._get_to_and_from_keys"></a>
#### \_get\_to\_and\_from\_keys
```python
| _get_to_and_from_keys() -> Tuple[str, str]
```
Based on the relation type, name of the relation and previous models and parts
stored in JoinParameters it resolves the current to and from keys, which are
different for ManyToMany relation, ForeignKey and reverse related of relations.
**Returns**:
`Tuple[str, str]`: to key and from key


@ -1,29 +0,0 @@
<a name="queryset.limit_query"></a>
# queryset.limit\_query
<a name="queryset.limit_query.LimitQuery"></a>
## LimitQuery Objects
```python
class LimitQuery()
```
Modifies the select query with limit clause.
<a name="queryset.limit_query.LimitQuery.apply"></a>
#### apply
```python
| apply(expr: sqlalchemy.sql.select) -> sqlalchemy.sql.select
```
Applies the limit clause.
**Arguments**:
- `expr` (`sqlalchemy.sql.selectable.Select`): query to modify
**Returns**:
`sqlalchemy.sql.selectable.Select`: modified query


@ -1,29 +0,0 @@
<a name="queryset.offset_query"></a>
# queryset.offset\_query
<a name="queryset.offset_query.OffsetQuery"></a>
## OffsetQuery Objects
```python
class OffsetQuery()
```
Modifies the select query with offset if set
<a name="queryset.offset_query.OffsetQuery.apply"></a>
#### apply
```python
| apply(expr: sqlalchemy.sql.select) -> sqlalchemy.sql.select
```
Applies the offset clause.
**Arguments**:
- `expr` (`sqlalchemy.sql.selectable.Select`): query to modify
**Returns**:
`sqlalchemy.sql.selectable.Select`: modified query


@ -1,29 +0,0 @@
<a name="queryset.order_query"></a>
# queryset.order\_query
<a name="queryset.order_query.OrderQuery"></a>
## OrderQuery Objects
```python
class OrderQuery()
```
Modifies the select query with given list of order_by clauses.
<a name="queryset.order_query.OrderQuery.apply"></a>
#### apply
```python
| apply(expr: sqlalchemy.sql.select) -> sqlalchemy.sql.select
```
Applies all order_by clauses if set.
**Arguments**:
- `expr` (`sqlalchemy.sql.selectable.Select`): query to modify
**Returns**:
`sqlalchemy.sql.selectable.Select`: modified query


@ -1,322 +0,0 @@
<a name="queryset.prefetch_query"></a>
# queryset.prefetch\_query
<a name="queryset.prefetch_query.sort_models"></a>
#### sort\_models
```python
sort_models(models: List["Model"], orders_by: Dict) -> List["Model"]
```
Since the prefetch query gets all related models by ids, the sorting needs to happen in
python. By default models are already sorted by id, so we re-sort only if
order_by parameters were set.
**Arguments**:
- `models` (`List["Model"]`): list of models already fetched from db
- `orders_by` (`Dict[str, str]`): order by dictionary
**Returns**:
`List["Model"]`: sorted list of models
<a name="queryset.prefetch_query.set_children_on_model"></a>
#### set\_children\_on\_model
```python
set_children_on_model(model: "Model", related: str, children: Dict, model_id: int, models: Dict, orders_by: Dict) -> None
```
Extracts ids of child models by the given relation id key value.
Based on those ids the actual children model instances are fetched from
already fetched data.
If needed the child models are resorted according to passed orders_by dict.
The relation is also registered, as each child is set as the parent's related field value.
**Arguments**:
- `model` (`Model`): parent model instance
- `related` (`str`): name of the related field
- `children` (`Dict[int, set]`): dictionary of children ids/ related field value
- `model_id` (`int`): id of the model on which children should be set
- `models` (`Dict`): dictionary of child models instances
- `orders_by` (`Dict`): order_by dictionary
<a name="queryset.prefetch_query.PrefetchQuery"></a>
## PrefetchQuery Objects
```python
class PrefetchQuery()
```
Query used to fetch related models in subsequent queries.
Each model is fetched only once by the name of the relation.
That means that for each prefetch_related entry a separate query is issued to the database.
<a name="queryset.prefetch_query.PrefetchQuery.prefetch_related"></a>
#### prefetch\_related
```python
| async prefetch_related(models: Sequence["Model"], rows: List) -> Sequence["Model"]
```
Main entry point for prefetch_query.
Receives list of already initialized parent models with all children from
select_related already populated. Also receives the list of raw sql result rows,
as it's quicker to extract ids that way instead of calling each model.
Returns list with related models already prefetched and set.
**Arguments**:
- `models` (`List[Model]`): list of already instantiated models from main query
- `rows` (`List[sqlalchemy.engine.result.RowProxy]`): row sql result of the main query before the prefetch
**Returns**:
`List[Model]`: list of models with children prefetched
<a name="queryset.prefetch_query.PrefetchQuery._extract_ids_from_raw_data"></a>
#### \_extract\_ids\_from\_raw\_data
```python
| _extract_ids_from_raw_data(parent_model: Type["Model"], column_name: str) -> Set
```
Iterates over raw rows and extracts id values of relation columns by using
prefixed column name.
**Arguments**:
- `parent_model` (`Type[Model]`): ormar model class
- `column_name` (`str`): name of the relation column which is a key column
**Returns**:
`set`: set of ids of related model that should be extracted
<a name="queryset.prefetch_query.PrefetchQuery._extract_ids_from_preloaded_models"></a>
#### \_extract\_ids\_from\_preloaded\_models
```python
| _extract_ids_from_preloaded_models(parent_model: Type["Model"], column_name: str) -> Set
```
Extracts relation ids from already populated models if they were included
in the original query before.
**Arguments**:
- `parent_model` (`Type["Model"]`): model from which related ids should be extracted
- `column_name` (`str`): name of the relation column which is a key column
**Returns**:
`set`: set of ids of related model that should be extracted
<a name="queryset.prefetch_query.PrefetchQuery._extract_required_ids"></a>
#### \_extract\_required\_ids
```python
| _extract_required_ids(parent_model: Type["Model"], reverse: bool, related: str) -> Set
```
Delegates extraction of the fields to either get ids from raw sql response
or from already populated models.
**Arguments**:
- `parent_model` (`Type["Model"]`): model from which related ids should be extracted
- `reverse` (`bool`): flag if the relation is reverse
- `related` (`str`): name of the field with relation
**Returns**:
`set`: set of ids of related model that should be extracted
<a name="queryset.prefetch_query.PrefetchQuery._get_filter_for_prefetch"></a>
#### \_get\_filter\_for\_prefetch
```python
| _get_filter_for_prefetch(parent_model: Type["Model"], target_model: Type["Model"], reverse: bool, related: str) -> List
```
Populates where clause with condition to return only models within the
set of extracted ids.
If there are no ids for the relation, an empty list is returned.
**Arguments**:
- `parent_model` (`Type["Model"]`): model from which related ids should be extracted
- `target_model` (`Type["Model"]`): model to which relation leads to
- `reverse` (`bool`): flag if the relation is reverse
- `related` (`str`): name of the field with relation
**Returns**:
`List[sqlalchemy.sql.elements.TextClause]`:
<a name="queryset.prefetch_query.PrefetchQuery._populate_nested_related"></a>
#### \_populate\_nested\_related
```python
| _populate_nested_related(model: "Model", prefetch_dict: Dict, orders_by: Dict) -> "Model"
```
Populates all related model children of the parent model that are
included in the prefetch query.
**Arguments**:
- `model` (`Model`): ormar model instance
- `prefetch_dict` (`Dict`): dictionary of models to prefetch
- `orders_by` (`Dict`): dictionary of order bys
**Returns**:
`Model`: model with children populated
<a name="queryset.prefetch_query.PrefetchQuery._prefetch_related_models"></a>
#### \_prefetch\_related\_models
```python
| async _prefetch_related_models(models: Sequence["Model"], rows: List) -> Sequence["Model"]
```
Main method of the query.
Translates select and prefetch lists into dictionaries to avoid querying the
same related models multiple times.
Keeps the list of already extracted models.
Extracts the related models from the database and later populates all children
on each of the parent models from the list.
**Arguments**:
- `models` (`List[Model]`): list of parent models from main query
- `rows` (`List[sqlalchemy.engine.result.RowProxy]`): raw response from sql query
**Returns**:
`List[Model]`: list of models with prefetch children populated
<a name="queryset.prefetch_query.PrefetchQuery._extract_related_models"></a>
#### \_extract\_related\_models
```python
| async _extract_related_models(related: str, target_model: Type["Model"], prefetch_dict: Dict, select_dict: Dict, excludable: "ExcludableItems", orders_by: Dict) -> None
```
Constructs queries with required ids and extracts data with fields that should
be included/excluded.
Runs the queries against the database and populates dictionaries with ids and
with the actual extracted children models.
Calls itself recursively to extract deeper nested relations of the related model.
**Arguments**:
- `related` (`str`): name of the relation
- `target_model` (`Type[Model]`): model to which relation leads to
- `prefetch_dict` (`Dict`): prefetch related list converted into dictionary
- `select_dict` (`Dict`): select related list converted into dictionary
- `excludable` (`ExcludableItems`): structure of fields to include and exclude
- `orders_by` (`Dict`): dictionary of order bys clauses
**Returns**:
`None`: None
<a name="queryset.prefetch_query.PrefetchQuery._run_prefetch_query"></a>
#### \_run\_prefetch\_query
```python
| async _run_prefetch_query(target_field: "BaseField", excludable: "ExcludableItems", filter_clauses: List, related_field_name: str) -> Tuple[str, str, List]
```
Actually runs the queries against the database and populates the raw response
for given related model.
Returns table prefix as it's later needed to eventually initialize the children
models.
**Arguments**:
- `target_field` (`"BaseField"`): ormar field with relation definition
- `filter_clauses` (`List[sqlalchemy.sql.elements.TextClause]`): list of clauses, actually one clause with ids of relation
**Returns**:
`Tuple[str, List]`: table prefix and raw rows from sql response
<a name="queryset.prefetch_query.PrefetchQuery._get_select_related_if_apply"></a>
#### \_get\_select\_related\_if\_apply
```python
| @staticmethod
| _get_select_related_if_apply(related: str, select_dict: Dict) -> Dict
```
Extracts the nested part of the select_related dictionary to get models nested
deeper on the related model and already loaded in the select related query.
**Arguments**:
- `related` (`str`): name of the relation
- `select_dict` (`Dict`): dictionary of select related models in main query
**Returns**:
`Dict`: dictionary with nested related of select related
<a name="queryset.prefetch_query.PrefetchQuery._update_already_loaded_rows"></a>
#### \_update\_already\_loaded\_rows
```python
| _update_already_loaded_rows(target_field: "BaseField", prefetch_dict: Dict, orders_by: Dict) -> None
```
Updates models that are already loaded, usually children of children.
**Arguments**:
- `target_field` (`"BaseField"`): ormar field with relation definition
- `prefetch_dict` (`Dict`): dictionaries of related models to prefetch
- `orders_by` (`Dict`): dictionary of order by clauses by model
<a name="queryset.prefetch_query.PrefetchQuery._populate_rows"></a>
#### \_populate\_rows
```python
| _populate_rows(rows: List, target_field: "ForeignKeyField", parent_model: Type["Model"], table_prefix: str, exclude_prefix: str, excludable: "ExcludableItems", prefetch_dict: Dict, orders_by: Dict) -> None
```
Instantiates children models extracted from given relation.
Populates them with their own nested children if they are included in prefetch
query.
Sets the initialized models and ids of them under corresponding keys in
already_extracted dictionary. Later those instances will be fetched by ids
and set on the parent model after sorting if needed.
**Arguments**:
- `excludable` (`ExcludableItems`): structure of fields to include and exclude
- `rows` (`List[sqlalchemy.engine.result.RowProxy]`): raw sql response from the prefetch query
- `target_field` (`"BaseField"`): field with relation definition from parent model
- `parent_model` (`Type[Model]`): model with relation definition
- `table_prefix` (`str`): prefix of the target table from current relation
- `prefetch_dict` (`Dict`): dictionaries of related models to prefetch
- `orders_by` (`Dict`): dictionary of order by clauses by model


@ -1,875 +0,0 @@
<a name="queryset.queryset"></a>
# queryset.queryset
<a name="queryset.queryset.QuerySet"></a>
## QuerySet Objects
```python
class QuerySet(Generic[T])
```
Main class to perform database queries, exposed on each model as objects attribute.
<a name="queryset.queryset.QuerySet.model_meta"></a>
#### model\_meta
```python
| @property
| model_meta() -> "ModelMeta"
```
Shortcut to model class Meta set on QuerySet model.
**Returns**:
`model Meta class`: Meta class of the model
<a name="queryset.queryset.QuerySet.model"></a>
#### model
```python
| @property
| model() -> Type["T"]
```
Shortcut to model class set on QuerySet.
**Returns**:
`Type[Model]`: model class
<a name="queryset.queryset.QuerySet.rebuild_self"></a>
#### rebuild\_self
```python
| rebuild_self(filter_clauses: List = None, exclude_clauses: List = None, select_related: List = None, limit_count: int = None, offset: int = None, excludable: "ExcludableItems" = None, order_bys: List = None, prefetch_related: List = None, limit_raw_sql: bool = None, proxy_source_model: Optional[Type["Model"]] = None) -> "QuerySet"
```
Method that returns a new instance of the queryset based on the passed params;
all params that are not passed are taken from the current values.
<a name="queryset.queryset.QuerySet._prefetch_related_models"></a>
#### \_prefetch\_related\_models
```python
| async _prefetch_related_models(models: List["T"], rows: List) -> List["T"]
```
Performs prefetch query for selected models names.
**Arguments**:
- `models` (`List[Model]`): list of already parsed main Models from main query
- `rows` (`List[sqlalchemy.engine.result.RowProxy]`): database rows from main query
**Returns**:
`List[Model]`: list of models with prefetch models populated
<a name="queryset.queryset.QuerySet._process_query_result_rows"></a>
#### \_process\_query\_result\_rows
```python
| _process_query_result_rows(rows: List) -> List["T"]
```
Process database rows and initialize ormar Model from each of the rows.
**Arguments**:
- `rows` (`List[sqlalchemy.engine.result.RowProxy]`): list of database rows from query result
**Returns**:
`List[Model]`: list of models
<a name="queryset.queryset.QuerySet._resolve_filter_groups"></a>
#### \_resolve\_filter\_groups
```python
| _resolve_filter_groups(groups: Any) -> Tuple[List[FilterGroup], List[str]]
```
Resolves filter groups to populate FilterAction params in group tree.
**Arguments**:
- `groups` (`Any`): tuple of FilterGroups
**Returns**:
`Tuple[List[FilterGroup], List[str]]`: list of resolved filter groups and select_related list
<a name="queryset.queryset.QuerySet.check_single_result_rows_count"></a>
#### check\_single\_result\_rows\_count
```python
| @staticmethod
| check_single_result_rows_count(rows: Sequence[Optional["T"]]) -> None
```
Verifies if the result has one and only one row.
**Arguments**:
- `rows` (`List[Model]`): one element list of Models
<a name="queryset.queryset.QuerySet.database"></a>
#### database
```python
| @property
| database() -> databases.Database
```
Shortcut to models database from Meta class.
**Returns**:
`databases.Database`: database
<a name="queryset.queryset.QuerySet.table"></a>
#### table
```python
| @property
| table() -> sqlalchemy.Table
```
Shortcut to models table from Meta class.
**Returns**:
`sqlalchemy.Table`: database table
<a name="queryset.queryset.QuerySet.build_select_expression"></a>
#### build\_select\_expression
```python
| build_select_expression(limit: int = None, offset: int = None, order_bys: List = None) -> sqlalchemy.sql.select
```
Constructs the actual database query used in the QuerySet.
If any of the params is not passed the QuerySet own value is used.
**Arguments**:
- `limit` (`int`): number to limit the query
- `offset` (`int`): number to offset by
- `order_bys` (`List`): list of order-by fields names
**Returns**:
`sqlalchemy.sql.selectable.Select`: built sqlalchemy select expression
<a name="queryset.queryset.QuerySet.filter"></a>
#### filter
```python
| filter(*args: Any, _exclude: bool = False, **kwargs: Any) -> "QuerySet[T]"
```
Allows you to filter by any `Model` attribute/field
as well as to fetch instances, with a filter across an FK relationship.
You can use special filter suffix to change the filter operands:
* exact - like `album__name__exact='Malibu'` (exact match)
* iexact - like `album__name__iexact='malibu'` (exact match case insensitive)
* contains - like `album__name__contains='Mal'` (sql like)
* icontains - like `album__name__icontains='mal'` (sql like case insensitive)
* in - like `album__name__in=['Malibu', 'Barclay']` (sql in)
* isnull - like `album__name__isnull=True` (sql is null)
(isnotnull `album__name__isnull=False` (sql is not null))
* gt - like `position__gt=3` (sql >)
* gte - like `position__gte=3` (sql >=)
* lt - like `position__lt=3` (sql <)
* lte - like `position__lte=3` (sql <=)
* startswith - like `album__name__startswith='Mal'` (exact start match)
* istartswith - like `album__name__istartswith='mal'` (case insensitive)
* endswith - like `album__name__endswith='ibu'` (exact end match)
* iendswith - like `album__name__iendswith='IBU'` (case insensitive)
Note that you can also use python style filters - check the docs!
**Arguments**:
- `_exclude` (`bool`): flag if it should be exclude or filter
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`QuerySet`: filtered QuerySet
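Below is a minimal usage sketch of these suffixes. The `Album` and `Track` models, the SQLite URL and the sample values are hypothetical (they are not part of ormar itself); the later sketches added throughout this file reuse the same two models.
```python
import databases
import ormar
import sqlalchemy

database = databases.Database("sqlite:///db.sqlite")
metadata = sqlalchemy.MetaData()


class Album(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=100)
    favorite: bool = ormar.Boolean(default=False)


class Track(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)
    album: Album = ormar.ForeignKey(Album)
    title: str = ormar.String(max_length=100)
    position: int = ormar.Integer()
    played: bool = ormar.Boolean(default=False)


async def filter_examples() -> None:
    # (tables need to be created and the database connected before running queries)
    # exact match across the FK relation
    await Track.objects.filter(album__name__exact="Malibu").all()
    # case-insensitive "like" combined with a comparison suffix
    await Track.objects.filter(album__name__icontains="mal", position__gte=2).all()
```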
<a name="queryset.queryset.QuerySet.exclude"></a>
#### exclude
```python
| exclude(*args: Any, **kwargs: Any) -> "QuerySet[T]"
```
Works exactly the same as filter and all modifiers (suffixes) are the same,
but returns a *not* condition.
So if you use `filter(name='John')`, which is `where name = 'John'` in SQL,
then `exclude(name='John')` is equivalent to `where name <> 'John'`.
Note that all conditions are joined so if you pass multiple values it
becomes a union of conditions.
`exclude(name='John', age>=35)` will become
`where not (name='John' and age>=35)`
**Arguments**:
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`QuerySet`: filtered QuerySet
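A short sketch of the negated condition, reusing the hypothetical `Track` model from the `filter()` sketch above.
```python
async def exclude_example() -> None:
    # roughly: SELECT ... WHERE NOT (position > 3)
    await Track.objects.exclude(position__gt=3).all()
    # roughly: WHERE NOT (album.name = 'Malibu' AND position >= 3), across the FK join
    await Track.objects.exclude(album__name="Malibu", position__gte=3).all()
```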
<a name="queryset.queryset.QuerySet.select_related"></a>
#### select\_related
```python
| select_related(related: Union[List, str]) -> "QuerySet[T]"
```
Allows to prefetch related models during the same query.
**With `select_related` always only one query is run against the database**,
meaning that one (sometimes complicated) join is generated and later nested
models are processed in python.
To fetch related model use `ForeignKey` names.
To chain related `Models` relation use double underscores between names.
**Arguments**:
- `related` (`Union[List, str]`): list of relation field names, can be linked by '__' to nest
**Returns**:
`QuerySet`: QuerySet
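A minimal sketch, again with the hypothetical `Album`/`Track` models defined under `filter()` above.
```python
async def select_related_example() -> None:
    # a single JOIN query; every returned Track has track.album already populated
    tracks = await Track.objects.select_related("album").all()
    for track in tracks:
        print(track.title, track.album.name if track.album else None)
```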
<a name="queryset.queryset.QuerySet.select_all"></a>
#### select\_all
```python
| select_all(follow: bool = False) -> "QuerySet[T]"
```
By default adds only directly related models.
If follow=True is set it adds also related models of related models.
To avoid getting stuck in an infinite loop (related models also keep a relation
to the parent model) a set of visited models is kept.
That way already visited models that are nested are loaded, but the load does not
follow them further. So Model A -> Model B -> Model C -> Model A -> Model X
will load the second Model A but will never follow into Model X.
Nested relations of that kind need to be loaded manually.
**Arguments**:
- `follow` (`bool`): flag to include also related models of related models - by default only directly related models are added
**Returns**:
`QuerySet`: QuerySet
<a name="queryset.queryset.QuerySet.prefetch_related"></a>
#### prefetch\_related
```python
| prefetch_related(related: Union[List, str]) -> "QuerySet[T]"
```
Allows to prefetch related models during query - but opposite to
`select_related` each subsequent model is fetched in a separate database query.
**With `prefetch_related` always one query per Model is run against the
database**, meaning that you will have multiple queries executed one
after another.
To fetch related model use `ForeignKey` names.
To chain related `Models` relation use double underscores between names.
**Arguments**:
- `related` (`Union[List, str]`): list of relation field names, can be linked by '__' to nest
**Returns**:
`QuerySet`: QuerySet
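A sketch of the same relation loaded with separate queries; `tracks` is assumed to be the default reverse relation name generated for the hypothetical `Track.album` foreign key.
```python
async def prefetch_related_example() -> None:
    # two queries: one for albums, one for all of their tracks,
    # merged together in python afterwards
    albums = await Album.objects.prefetch_related("tracks").all()
    for album in albums:
        print(album.name, [track.title for track in album.tracks])
```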
<a name="queryset.queryset.QuerySet.fields"></a>
#### fields
```python
| fields(columns: Union[List, str, Set, Dict], _is_exclude: bool = False) -> "QuerySet[T]"
```
With `fields()` you can select subset of model columns to limit the data load.
Note that `fields()` and `exclude_fields()` work both for main models
(on normal queries like `get`, `all` etc.)
as well as `select_related` and `prefetch_related`
models (with nested notation).
You can select specified fields by passing a `str, List[str], Set[str] or
dict` with nested definition.
To include related models use notation
`{related_name}__{column}[__{optional_next} etc.]`.
`fields()` can be called several times, building up the columns to select.
If you include related models in a `select_related()` call but do not specify
columns for those models in `fields()`, all fields of
those nested models are implied.
Mandatory fields cannot be excluded as that would raise a `ValidationError`;
to exclude a field it has to be nullable.
Pk column cannot be excluded - it's always auto added even if
not explicitly included.
You can also pass fields to include as a dictionary or set.
To mark a field as included in a dictionary use its name as key
and ellipsis as value.
To traverse nested models use nested dictionaries.
To include fields at last level instead of nested dictionary a set can be used.
To include whole nested model specify model related field name and ellipsis.
**Arguments**:
- `_is_exclude` (`bool`): flag if it's exclude or include operation
- `columns` (`Union[List, str, Set, Dict]`): columns to include
**Returns**:
`QuerySet`: QuerySet
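A sketch of the list, dict and set notations, using the hypothetical models from the `filter()` sketch.
```python
async def fields_example() -> None:
    # plain list of own and nested columns (pk columns are always added)
    await Track.objects.select_related("album").fields(["title", "album__name"]).all()
    # equivalent nested notation: dict for traversal, set at the last level
    await Track.objects.select_related("album").fields({"title": ..., "album": {"name"}}).all()
```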
<a name="queryset.queryset.QuerySet.exclude_fields"></a>
#### exclude\_fields
```python
| exclude_fields(columns: Union[List, str, Set, Dict]) -> "QuerySet[T]"
```
With `exclude_fields()` you can select subset of model columns that will
be excluded to limit the data load.
It's the opposite of `fields()` method so check documentation above
to see what options are available.
Especially check above how you can pass also nested dictionaries
and sets as a mask to exclude fields from whole hierarchy.
Note that `fields()` and `exclude_fields()` work both for main models
(on normal queries like `get`, `all` etc.)
as well as `select_related` and `prefetch_related` models
(with nested notation).
Mandatory fields cannot be excluded as that would raise a `ValidationError`;
to exclude a field it has to be nullable.
Pk column cannot be excluded - it's always auto added even
if explicitly excluded.
**Arguments**:
- `columns` (`Union[List, str, Set, Dict]`): columns to exclude
**Returns**:
`QuerySet`: QuerySet
<a name="queryset.queryset.QuerySet.order_by"></a>
#### order\_by
```python
| order_by(columns: Union[List, str, OrderAction]) -> "QuerySet[T]"
```
With `order_by()` you can order the results from database based on your
choice of fields.
You can provide a string with field name or list of strings with fields names.
Ordering in sql will be applied in order of names you provide in order_by.
By default, if you do not provide ordering, `ormar` explicitly orders by
all primary keys.
If you are sorting by nested models, which causes the result rows to be
unsorted by the main model, `ormar` will combine those children rows into
one main model.
The main model will never be duplicated in the result.
To order by a main model field just provide a field name.
To sort on nested models separate field names with dunder '__'.
You can sort this way across all relation types -> `ForeignKey`,
reverse virtual FK and `ManyToMany` fields.
To sort in descending order provide a hyphen in front of the field name.
**Arguments**:
- `columns` (`Union[List, str]`): columns by which models should be sorted
**Returns**:
`QuerySet`: QuerySet
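A sketch of descending and nested ordering with the hypothetical models from above.
```python
async def order_by_example() -> None:
    # descending by the related album name (note the hyphen), then ascending position
    await Track.objects.select_related("album").order_by(["-album__name", "position"]).all()
```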
<a name="queryset.queryset.QuerySet.values"></a>
#### values
```python
| async values(fields: Union[List, str, Set, Dict] = None, exclude_through: bool = False, _as_dict: bool = True, _flatten: bool = False) -> List
```
Return a list of dictionaries with column values in order of the fields
passed or all fields from queried models.
To filter for given row use filter/exclude methods before values,
to limit number of rows use limit/offset or paginate before values.
Note that it always returns a list, even for a single row from the database.
**Arguments**:
- `exclude_through` (`bool`): flag if through models should be excluded
- `_flatten` (`bool`): internal parameter to flatten one element tuples
- `_as_dict` (`bool`): internal parameter if return dict or tuples
- `fields` (`Union[List, str, Set, Dict]`): field name or list of field names to extract from db
<a name="queryset.queryset.QuerySet.values_list"></a>
#### values\_list
```python
| async values_list(fields: Union[List, str, Set, Dict] = None, flatten: bool = False, exclude_through: bool = False) -> List
```
Return a list of tuples with column values in order of the fields passed or
all fields from queried models.
When one field is passed you can flatten the list of tuples into list of values
of that single field.
To filter for given row use filter/exclude methods before values,
to limit number of rows use limit/offset or paginate before values.
Note that it always returns a list, even for a single row from the database.
**Arguments**:
- `exclude_through` (`bool`): flag if through models should be excluded
- `fields` (`Union[str, List[str]]`): field name or list of field names to extract from db
- `flatten` (`bool`): when one field is passed you can flatten the list of tuples
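A sketch of both methods on the hypothetical `Track` model; flattening only applies when a single field is requested.
```python
async def values_examples() -> None:
    # list of dicts limited to the two requested columns
    rows = await Track.objects.values(["title", "position"])
    # list of plain values instead of one-element tuples, thanks to flatten=True
    titles = await Track.objects.values_list("title", flatten=True)
    print(rows, titles)
```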
<a name="queryset.queryset.QuerySet.exists"></a>
#### exists
```python
| async exists() -> bool
```
Returns a bool value to confirm if there are rows matching the given criteria
(applied with `filter` and `exclude` if set).
**Returns**:
`bool`: result of the check
<a name="queryset.queryset.QuerySet.count"></a>
#### count
```python
| async count(distinct: bool = True) -> int
```
Returns number of rows matching the given criteria
(applied with `filter` and `exclude` if set before).
If `distinct` is `True` (the default), this will return the number of primary rows selected. If `False`,
the count will be the total number of rows returned
(including extra rows for `one-to-many` or `many-to-many` left `select_related` table joins).
`False` is the legacy (buggy) behavior for workflows that depend on it.
**Arguments**:
- `distinct` (`bool`): flag if the primary table rows should be distinct or not
**Returns**:
`int`: number of rows
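A sketch of the `distinct` flag; `tracks` is again the assumed default reverse relation name from the earlier sketches.
```python
async def count_example() -> None:
    # primary (main model) rows only - the default
    albums = await Album.objects.select_related("tracks").count()
    # raw joined rows, i.e. the legacy behavior
    joined_rows = await Album.objects.select_related("tracks").count(distinct=False)
    print(albums, joined_rows)
```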
<a name="queryset.queryset.QuerySet.max"></a>
#### max
```python
| async max(columns: Union[str, List[str]]) -> Any
```
Returns max value of columns for rows matching the given criteria
(applied with `filter` and `exclude` if set before).
**Returns**:
`Any`: max value of column(s)
<a name="queryset.queryset.QuerySet.min"></a>
#### min
```python
| async min(columns: Union[str, List[str]]) -> Any
```
Returns min value of columns for rows matching the given criteria
(applied with `filter` and `exclude` if set before).
**Returns**:
`Any`: min value of column(s)
<a name="queryset.queryset.QuerySet.sum"></a>
#### sum
```python
| async sum(columns: Union[str, List[str]]) -> Any
```
Returns sum value of columns for rows matching the given criteria
(applied with `filter` and `exclude` if set before).
**Returns**:
`int`: sum value of columns
<a name="queryset.queryset.QuerySet.avg"></a>
#### avg
```python
| async avg(columns: Union[str, List[str]]) -> Any
```
Returns avg value of columns for rows matching the given criteria
(applied with `filter` and `exclude` if set before).
**Returns**:
`Union[int, float, List]`: avg value of columns
<a name="queryset.queryset.QuerySet.update"></a>
#### update
```python
| async update(each: bool = False, **kwargs: Any) -> int
```
Updates the model table after applying the filters from kwargs.
You have to either pass a filter to narrow down the query or explicitly pass
the each=True flag to affect the whole table.
**Arguments**:
- `each` (`bool`): flag if whole table should be affected if no filter is passed
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`int`: number of updated rows
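A sketch of both ways of scoping the update, using the hypothetical `Track.played` flag from the `filter()` sketch.
```python
async def update_example() -> None:
    # narrowed down by a filter
    await Track.objects.filter(position__gt=3).update(played=True)
    # touching the whole table requires the explicit each=True flag
    await Track.objects.update(each=True, played=False)
```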
<a name="queryset.queryset.QuerySet.delete"></a>
#### delete
```python
| async delete(*args: Any, each: bool = False, **kwargs: Any) -> int
```
Deletes from the model table after applying the filters from kwargs.
You have to either pass a filter to narrow down the query or explicitly pass
the each=True flag to affect the whole table.
**Arguments**:
- `each` (`bool`): flag if whole table should be affected if no filter is passed
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`int`: number of deleted rows
<a name="queryset.queryset.QuerySet.paginate"></a>
#### paginate
```python
| paginate(page: int, page_size: int = 20) -> "QuerySet[T]"
```
You can paginate the result which is a combination of offset and limit clauses.
Limit is set to page size and offset is set to (page-1) * page_size.
**Arguments**:
- `page_size` (`int`): numbers of items per page
- `page` (`int`): page number
**Returns**:
`QuerySet`: QuerySet
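A short sketch of the offset/limit combination.
```python
async def paginate_example() -> None:
    # page 3 with 10 items per page -> LIMIT 10 OFFSET 20
    await Track.objects.paginate(page=3, page_size=10).all()
```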
<a name="queryset.queryset.QuerySet.limit"></a>
#### limit
```python
| limit(limit_count: int, limit_raw_sql: bool = None) -> "QuerySet[T]"
```
You can limit the results to desired number of parent models.
To limit the actual number of database query rows instead of number of main
models use the `limit_raw_sql` parameter flag, and set it to `True`.
**Arguments**:
- `limit_raw_sql` (`bool`): flag if raw sql should be limited
- `limit_count` (`int`): number of models to limit
**Returns**:
`QuerySet`: QuerySet
<a name="queryset.queryset.QuerySet.offset"></a>
#### offset
```python
| offset(offset: int, limit_raw_sql: bool = None) -> "QuerySet[T]"
```
You can also offset the results by desired number of main models.
To offset the actual number of database query rows instead of number of main
models use the `limit_raw_sql` parameter flag, and set it to `True`.
**Arguments**:
- `limit_raw_sql` (`bool`): flag if raw sql should be offset
- `offset` (`int`): numbers of models to offset
**Returns**:
`QuerySet`: QuerySet
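A sketch contrasting the two counting modes, with the hypothetical models from above.
```python
async def limit_offset_example() -> None:
    # limit/offset counted in main (parent) models - the default
    await Track.objects.select_related("album").offset(5).limit(5).all()
    # limit/offset applied to the raw joined rows instead
    await Track.objects.select_related("album").offset(
        5, limit_raw_sql=True
    ).limit(5, limit_raw_sql=True).all()
```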
<a name="queryset.queryset.QuerySet.first"></a>
#### first
```python
| async first(*args: Any, **kwargs: Any) -> "T"
```
Gets the first row from the db ordered by primary key column ascending.
**Raises**:
- `NoMatch`: if no rows are returned
- `MultipleMatches`: if more than 1 row is returned.
**Arguments**:
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`Model`: returned model
<a name="queryset.queryset.QuerySet.get_or_none"></a>
#### get\_or\_none
```python
| async get_or_none(*args: Any, **kwargs: Any) -> Optional["T"]
```
Gets the first row from the db meeting the criteria set by kwargs.
If no criteria are set it will return the last row in db sorted by pk.
Passing criteria is actually calling the filter(*args, **kwargs) method described
below.
If no match is found None will be returned.
**Arguments**:
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`Model`: returned model
<a name="queryset.queryset.QuerySet.get"></a>
#### get
```python
| async get(*args: Any, **kwargs: Any) -> "T"
```
Gets the first row from the db meeting the criteria set by kwargs.
If no criteria are set it will return the last row in db sorted by pk.
Passing criteria is actually calling the filter(*args, **kwargs) method described
below.
**Raises**:
- `NoMatch`: if no rows are returned
- `MultipleMatches`: if more than 1 row is returned.
**Arguments**:
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`Model`: returned model
<a name="queryset.queryset.QuerySet.get_or_create"></a>
#### get\_or\_create
```python
| async get_or_create(_defaults: Optional[Dict[str, Any]] = None, *args: Any, **kwargs: Any) -> Tuple["T", bool]
```
Combination of create and get methods.
Tries to get a row meeting the criteria for kwargs
and if `NoMatch` exception is raised
it creates a new one with given kwargs.
Passing a criteria is actually calling filter(*args, **kwargs) method described
below.
**Arguments**:
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`Model`: returned or created Model
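A sketch of the returned tuple; `_defaults` values are only applied when a new row is created (the field names come from the hypothetical models in the `filter()` sketch).
```python
async def get_or_create_example() -> None:
    album, created = await Album.objects.get_or_create(
        name="Malibu", _defaults={"favorite": True}
    )
    print(album.pk, created)
```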
<a name="queryset.queryset.QuerySet.update_or_create"></a>
#### update\_or\_create
```python
| async update_or_create(**kwargs: Any) -> "T"
```
Updates the model, or in case there is no match in database creates a new one.
**Arguments**:
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`Model`: updated or created model
<a name="queryset.queryset.QuerySet.all"></a>
#### all
```python
| async all(*args: Any, **kwargs: Any) -> List["T"]
```
Returns all rows from a database for given model for set filter options.
Passing args and/or kwargs is a shortcut equivalent to calling
`filter(*args, **kwargs).all()`.
If there are no rows meeting the criteria an empty list is returned.
**Arguments**:
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`List[Model]`: list of returned models
<a name="queryset.queryset.QuerySet.create"></a>
#### create
```python
| async create(**kwargs: Any) -> "T"
```
Creates the model instance, saves it in a database and returns the updated model
(with pk populated if not passed and autoincrement is set).
The allowed kwargs are `Model` fields names and proper value types.
**Arguments**:
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`Model`: created model
<a name="queryset.queryset.QuerySet.bulk_create"></a>
#### bulk\_create
```python
| async bulk_create(objects: List["T"]) -> None
```
Performs a bulk create in one database session to speed up the process.
Allows you to create multiple objects at once.
A valid list of `Model` objects needs to be passed.
Bulk operations do not send signals.
**Arguments**:
- `objects` (`List[Model]`): list of ormar models already initialized and ready to save.
<a name="queryset.queryset.QuerySet.bulk_update"></a>
#### bulk\_update
```python
| async bulk_update(objects: List["T"], columns: List[str] = None) -> None
```
Performs bulk update in one database session to speed up the process.
Allows you to update multiple instances at once.
All `Models` passed need to have primary key column populated.
You can also select which fields to update by passing `columns` list
as a list of string names.
Bulk operations do not send signals.
**Arguments**:
- `objects` (`List[Model]`): list of ormar models
- `columns` (`List[str]`): list of columns to update
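A combined sketch of both bulk methods with the hypothetical models from above; the rows are reloaded before `bulk_update()` so that their primary keys are populated.
```python
async def bulk_example() -> None:
    album = await Album.objects.create(name="Malibu")
    await Track.objects.bulk_create(
        [Track(album=album, title=f"Song {i}", position=i) for i in range(1, 4)]
    )
    # bulk_update needs primary keys populated, so fetch the rows back first
    tracks = await Track.objects.all()
    for track in tracks:
        track.played = True
    await Track.objects.bulk_update(tracks, columns=["played"])
```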

View File

@ -1,129 +0,0 @@
<a name="queryset.query"></a>
# queryset.query
<a name="queryset.query.Query"></a>
## Query Objects
```python
class Query()
```
<a name="queryset.query.Query._init_sorted_orders"></a>
#### \_init\_sorted\_orders
```python
| _init_sorted_orders() -> None
```
Initialize empty order_by dict to be populated later during the query call
<a name="queryset.query.Query.apply_order_bys_for_primary_model"></a>
#### apply\_order\_bys\_for\_primary\_model
```python
| apply_order_bys_for_primary_model() -> None
```
Applies order_by queries on main model when it's used as a subquery.
That way the subquery with limit and offset only on main model has proper
sorting applied and correct models are fetched.
<a name="queryset.query.Query._apply_default_model_sorting"></a>
#### \_apply\_default\_model\_sorting
```python
| _apply_default_model_sorting() -> None
```
Applies orders_by from the model Meta class (if provided); if it was not provided
it is filled by the metaclass, so it's always there and falls back to the pk column.
<a name="queryset.query.Query._pagination_query_required"></a>
#### \_pagination\_query\_required
```python
| _pagination_query_required() -> bool
```
Checks if limit or offset are set, the flag limit_sql_raw is not set
and query has select_related applied. Otherwise we can limit/offset normally
at the end of whole query.
**Returns**:
`bool`: result of the check
<a name="queryset.query.Query.build_select_expression"></a>
#### build\_select\_expression
```python
| build_select_expression() -> Tuple[sqlalchemy.sql.select, List[str]]
```
Main entry point from outside (after proper initialization).
Extracts columns list to fetch,
construct all required joins for select related,
then applies all conditional and sort clauses.
Returns ready to run query with all joins and clauses.
**Returns**:
`sqlalchemy.sql.selectable.Select`: ready to run query with all joins and clauses.
<a name="queryset.query.Query._build_pagination_condition"></a>
#### \_build\_pagination\_condition
```python
| _build_pagination_condition() -> Tuple[
| sqlalchemy.sql.expression.TextClause, sqlalchemy.sql.expression.TextClause
| ]
```
Builds a condition used to apply limit and offset only to the main table in a join
(otherwise you could get an only partially constructed main model
if the number of children exceeds the applied limit and select_related is used).
Used also to change the behaviour of first() and get() without arguments.
Needed only if limit or offset are set, the flag limit_sql_raw is not set
and the query has select_related applied. Otherwise we can limit/offset normally
at the end of the whole query.
The condition is added to filters to filter out the desired number of main model
primary key values. The whole query is used to determine the values.
<a name="queryset.query.Query._apply_expression_modifiers"></a>
#### \_apply\_expression\_modifiers
```python
| _apply_expression_modifiers(expr: sqlalchemy.sql.select) -> sqlalchemy.sql.select
```
Receives the select query (might be join) and applies:
* Filter clauses
* Exclude filter clauses
* Limit clauses
* Offset clauses
* Order by clauses
Returns complete ready to run query.
**Arguments**:
- `expr` (`sqlalchemy.sql.selectable.Select`): select expression before clauses
**Returns**:
`sqlalchemy.sql.selectable.Select`: expression with all present clauses applied
<a name="queryset.query.Query._reset_query_parameters"></a>
#### \_reset\_query\_parameters
```python
| _reset_query_parameters() -> None
```
Although it should be created each time before the call we reset the key params
anyway.

View File

@ -1,139 +0,0 @@
<a name="queryset.reverse_alias_resolver"></a>
# queryset.reverse\_alias\_resolver
<a name="queryset.reverse_alias_resolver.ReverseAliasResolver"></a>
## ReverseAliasResolver Objects
```python
class ReverseAliasResolver()
```
Class is used to reverse resolve table aliases into relation strings
to parse raw data columns and replace table prefixes with full relation string
<a name="queryset.reverse_alias_resolver.ReverseAliasResolver.resolve_columns"></a>
#### resolve\_columns
```python
| resolve_columns(columns_names: List[str]) -> Dict
```
Takes raw query prefixed column and resolves the prefixes to
relation strings (relation names connected with dunders).
**Arguments**:
- `columns_names` (`List[str]`): list of column names with prefixes from query
**Returns**:
`Union[None, Dict[str, str]]`: dictionary of prefix: resolved names
<a name="queryset.reverse_alias_resolver.ReverseAliasResolver._resolve_column_with_prefix"></a>
#### \_resolve\_column\_with\_prefix
```python
| _resolve_column_with_prefix(column_name: str, prefix: str) -> None
```
Takes the prefixed column, checks if field should be excluded, and if not
it proceeds to replace prefix of a table with full relation string.
Sample: translates: "xsd12df_name" -> into: "posts__user__name"
**Arguments**:
- `column_name` (`str`): prefixed name of the column
- `prefix` (`str`): extracted prefix
<a name="queryset.reverse_alias_resolver.ReverseAliasResolver._check_if_field_is_excluded"></a>
#### \_check\_if\_field\_is\_excluded
```python
| _check_if_field_is_excluded(prefix: str, field: "ForeignKeyField", is_through: bool) -> bool
```
Checks if the given relation is excluded in the current query.
Note that, contrary to other queryset methods, here you can exclude the
in-between models but keep the end columns, which does not make sense
when parsing the raw data into models.
So in the relation category -> category_x_post -> post -> user you can exclude
the category_x_post and post models but keep the user one (in the ormar model
context that is not possible, as if you excluded the through and post models
there would be no way to reach the user model).
Exclusions happen on a model before the current one, so we need to move back
in the chain of models by one or by two (m2m relations have a through model in between).
**Arguments**:
- `prefix` (`str`): table alias
- `field` (`ForeignKeyField`): field with relation
- `is_through` (`bool`): flag if current table is a through table
**Returns**:
`bool`: result of the check
<a name="queryset.reverse_alias_resolver.ReverseAliasResolver._get_previous_excludable"></a>
#### \_get\_previous\_excludable
```python
| _get_previous_excludable(prefix: str, field: "ForeignKeyField", shift: int = 1) -> "Excludable"
```
Returns excludable related to model previous in chain of models.
Used to check if current model should be excluded.
**Arguments**:
- `prefix` (`str`): prefix of a current table
- `field` (`ForeignKeyField`): field with relation
- `shift` (`int`): how many models back to go - for m2m it's 2 due to through models
**Returns**:
`Excludable`: excludable for previous model
<a name="queryset.reverse_alias_resolver.ReverseAliasResolver._create_prefixes_map"></a>
#### \_create\_prefixes\_map
```python
| _create_prefixes_map() -> None
```
Creates a map of alias manager alias keys to relation strings.
I.e. in the alias manager you can have the alias user_roles: xas12ad;
this method will create the entry user_roles: roles, where roles is the name of
the relation on the user model.
It will also keep the relation field in a separate dictionary so we can later
extract field names and owner models.
<a name="queryset.reverse_alias_resolver.ReverseAliasResolver._handle_through_fields_and_prefix"></a>
#### \_handle\_through\_fields\_and\_prefix
```python
| _handle_through_fields_and_prefix(model_cls: Type["Model"], field: "ForeignKeyField", previous_related_str: str, relation: str) -> str
```
Registers through models for m2m relations and switches prefix for
the one linking from through model to target model.
For other relations returns current model name + relation name as prefix.
Nested relations are a chain of relation names with __ in between.
**Arguments**:
- `model_cls` (`Type["Model"]`): model of current relation
- `field` (`ForeignKeyField`): field with relation
- `previous_related_str` (`str`): concatenated chain linked with "__"
- `relation` (`str`): name of the current relation in chain
**Returns**:
`str`: name of prefix to populate

View File

@ -1,213 +0,0 @@
<a name="queryset.utils"></a>
# queryset.utils
<a name="queryset.utils.check_node_not_dict_or_not_last_node"></a>
#### check\_node\_not\_dict\_or\_not\_last\_node
```python
check_node_not_dict_or_not_last_node(part: str, is_last: bool, current_level: Any) -> bool
```
Checks if given name is not present in the current level of the structure.
Checks if given name is not the last name in the split list of parts.
Checks if the given name in current level is not a dictionary.
All those checks verify if there is a need for deeper traversal.
**Arguments**:
- `part` (`str`): name of the currently processed part
- `is_last` (`bool`): flag if the given part is the last one in the split list
- `current_level` (`Any`): current level of the traversed structure
**Returns**:
`bool`: result of the check
<a name="queryset.utils.translate_list_to_dict"></a>
#### translate\_list\_to\_dict
```python
translate_list_to_dict(list_to_trans: Union[List, Set], is_order: bool = False) -> Dict
```
Splits the list of strings by '__' and converts them to dictionary with nested
models grouped by parent model. That way each model appears only once in the whole
dictionary and children are grouped under parent name.
The default required key is Ellipsis, like in pydantic.
**Arguments**:
- `list_to_trans` (`set`): input list
- `is_order` (`bool`): flag if the list holds order_by clauses, as they require a special default value with sort order
**Returns**:
`Dict`: input list converted to a dictionary
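A small illustration of the conversion; the import path is assumed from the module name above (the helper is internal).
```python
from ormar.queryset.utils import translate_list_to_dict

print(translate_list_to_dict(["album__artist", "album__name", "title"]))
# {'album': {'artist': Ellipsis, 'name': Ellipsis}, 'title': Ellipsis}
```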
<a name="queryset.utils.convert_set_to_required_dict"></a>
#### convert\_set\_to\_required\_dict
```python
convert_set_to_required_dict(set_to_convert: set) -> Dict
```
Converts set to dictionary of required keys.
Required key is Ellipsis.
**Arguments**:
- `set_to_convert` (`set`): set to convert to dict
**Returns**:
`Dict`: set converted to dict of ellipsis
<a name="queryset.utils.update"></a>
#### update
```python
update(current_dict: Any, updating_dict: Any) -> Dict
```
Update one dict with another but with regard for nested keys.
That way nested sets are unionised, dicts updated and
only other values are overwritten.
**Arguments**:
- `current_dict` (`Dict[str, ellipsis]`): dict to update
- `updating_dict` (`Dict`): dict with values to update
**Returns**:
`Dict`: combination of both dicts
<a name="queryset.utils.subtract_dict"></a>
#### subtract\_dict
```python
subtract_dict(current_dict: Any, updating_dict: Any) -> Dict
```
Subtracts one dict from another with regard for nested keys.
That way nested sets and dicts are processed recursively, so that
only the values not present in the second dict remain.
**Arguments**:
- `current_dict` (`Dict[str, ellipsis]`): dict to subtract from
- `updating_dict` (`Dict`): dict with values to subtract
**Returns**:
`Dict`: the difference of both dicts
<a name="queryset.utils.update_dict_from_list"></a>
#### update\_dict\_from\_list
```python
update_dict_from_list(curr_dict: Dict, list_to_update: Union[List, Set]) -> Dict
```
Converts the list into dictionary and later performs special update, where
nested keys that are sets or dicts are combined and not overwritten.
**Arguments**:
- `curr_dict` (`Dict`): dict to update
- `list_to_update` (`List[str]`): list with values to update the dict
**Returns**:
`Dict`: updated dict
<a name="queryset.utils.extract_nested_models"></a>
#### extract\_nested\_models
```python
extract_nested_models(model: "Model", model_type: Type["Model"], select_dict: Dict, extracted: Dict) -> None
```
Iterates over model relations and extracts all nested models from select_dict,
putting them in the corresponding list under the relation name in the extracted dict.
Basically flattens all relations into a dictionary of related models; it can be
called on several models to extract all of their children into a dictionary of lists
with children models.
Goes also into nested relations if needed (specified in select_dict).
**Arguments**:
- `model` (`Model`): parent Model
- `model_type` (`Type[Model]`): parent model class
- `select_dict` (`Dict`): dictionary of related models from select_related
- `extracted` (`Dict`): dictionary with already extracted models
<a name="queryset.utils.extract_models_to_dict_of_lists"></a>
#### extract\_models\_to\_dict\_of\_lists
```python
extract_models_to_dict_of_lists(model_type: Type["Model"], models: Sequence["Model"], select_dict: Dict, extracted: Dict = None) -> Dict
```
Receives a list of models and extracts all of the children and their children
into dictionary of lists with children models, flattening the structure to one dict
with all children models under their relation keys.
**Arguments**:
- `model_type` (`Type[Model]`): parent model class
- `models` (`List[Model]`): list of models from which related models should be extracted.
- `select_dict` (`Dict`): dictionary of related models from select_related
- `extracted` (`Dict`): dictionary with already extracted models
**Returns**:
`Dict`: dictionary of lists of related models
<a name="queryset.utils.get_relationship_alias_model_and_str"></a>
#### get\_relationship\_alias\_model\_and\_str
```python
get_relationship_alias_model_and_str(source_model: Type["Model"], related_parts: List) -> Tuple[str, Type["Model"], str, bool]
```
Walks the relation to retrieve the actual model on which the clause should be
constructed, extracts alias based on last relation leading to target model.
**Arguments**:
- `related_parts` (`Union[List, List[str]]`): list of related names extracted from string
- `source_model` (`Type[Model]`): model from which relation starts
**Returns**:
`Tuple[str, Type["Model"], str]`: table prefix, target model and relation string
<a name="queryset.utils._process_through_field"></a>
#### \_process\_through\_field
```python
_process_through_field(related_parts: List, relation: Optional[str], related_field: "BaseField", previous_model: Type["Model"], previous_models: List[Type["Model"]]) -> Tuple[Type["Model"], Optional[str], bool]
```
Helper processing through models as they need to be treated differently.
**Arguments**:
- `related_parts` (`List[str]`): split relation string
- `relation` (`str`): relation name
- `related_field` (`"ForeignKeyField"`): field with relation declaration
- `previous_model` (`Type["Model"]`): model from which relation is coming
- `previous_models` (`List[Type["Model"]]`): list of already visited models in relation chain
**Returns**:
`Tuple[Type["Model"], str, bool]`: previous_model, relation, is_through

View File

@ -1,171 +0,0 @@
<a name="relations.alias_manager"></a>
# relations.alias\_manager
<a name="relations.alias_manager.get_table_alias"></a>
#### get\_table\_alias
```python
get_table_alias() -> str
```
Creates a random string that is used to alias tables in joins.
It's necessary that each relation has its own alias because you can link
to the same target tables from multiple fields on one model as well as from
multiple different models in one join.
**Returns**:
`str`: randomly generated alias
<a name="relations.alias_manager.AliasManager"></a>
## AliasManager Objects
```python
class AliasManager()
```
Keeps all aliases of relations between different tables.
One global instance is shared between all models.
<a name="relations.alias_manager.AliasManager.reversed_aliases"></a>
#### reversed\_aliases
```python
| @property
| reversed_aliases() -> Dict
```
Returns swapped key-value pairs from aliases where alias is the key.
**Returns**:
`Dict`: dictionary of prefix to relation
<a name="relations.alias_manager.AliasManager.prefixed_columns"></a>
#### prefixed\_columns
```python
| @staticmethod
| prefixed_columns(alias: str, table: sqlalchemy.Table, fields: List = None) -> List[text]
```
Creates a list of aliased sqlalchemy text clauses from
a string alias and a sqlalchemy.Table.
Optional list of fields to include can be passed to extract only those columns.
List has to have sqlalchemy names of columns (ormar aliases) not the ormar ones.
**Arguments**:
- `alias` (`str`): alias of given table
- `table` (`sqlalchemy.Table`): table from which fields should be aliased
- `fields` (`Optional[List[str]]`): fields to include
**Returns**:
`List[text]`: list of sqlalchemy text clauses with "column name as aliased name"
<a name="relations.alias_manager.AliasManager.prefixed_table_name"></a>
#### prefixed\_table\_name
```python
| @staticmethod
| prefixed_table_name(alias: str, table: sqlalchemy.Table) -> text
```
Creates text clause with table name with aliased name.
**Arguments**:
- `alias` (`str`): alias of given table
- `table` (`sqlalchemy.Table`): table
**Returns**:
`sqlalchemy text clause`: sqlalchemy text clause as "table_name aliased_name"
<a name="relations.alias_manager.AliasManager.add_relation_type"></a>
#### add\_relation\_type
```python
| add_relation_type(source_model: Type["Model"], relation_name: str, reverse_name: str = None) -> None
```
Registers the relations defined in ormar models.
Given the relation it registers also the reverse side of this relation.
Used by both ForeignKey and ManyToMany relations.
Each relation is registered as Model name and relation name.
Each alias registered has to be unique.
Aliases are used to construct joins to assure proper links between tables.
That way you can link to the same target tables from multiple fields
on one model as well as from multiple different models in one join.
**Arguments**:
- `source_model` (`source Model`): model with relation defined
- `relation_name` (`str`): name of the relation to define
- `reverse_name` (`Optional[str]`): name of related_name of given relation for m2m relations
**Returns**:
`None`: none
<a name="relations.alias_manager.AliasManager.add_alias"></a>
#### add\_alias
```python
| add_alias(alias_key: str) -> str
```
Adds alias to the dictionary of aliases under given key.
**Arguments**:
- `alias_key` (`str`): key of relation to generate alias for
**Returns**:
`str`: generated alias
<a name="relations.alias_manager.AliasManager.resolve_relation_alias"></a>
#### resolve\_relation\_alias
```python
| resolve_relation_alias(from_model: Union[Type["Model"], Type["ModelRow"]], relation_name: str) -> str
```
Given model and relation name returns the alias for this relation.
**Arguments**:
- `from_model` (`source Model`): model with relation defined
- `relation_name` (`str`): name of the relation field
**Returns**:
`str`: alias of the relation
<a name="relations.alias_manager.AliasManager.resolve_relation_alias_after_complex"></a>
#### resolve\_relation\_alias\_after\_complex
```python
| resolve_relation_alias_after_complex(source_model: Union[Type["Model"], Type["ModelRow"]], relation_str: str, relation_field: "ForeignKeyField") -> str
```
Given source model and relation string returns the alias for this complex
relation if it exists, otherwise fallback to normal relation from a relation
field definition.
**Arguments**:
- `relation_field` (`"ForeignKeyField"`): field with direct relation definition
- `source_model` (`source Model`): model with query starts
- `relation_str` (`str`): string with relation joins defined
**Returns**:
`str`: alias of the relation

View File

@ -1,783 +0,0 @@
<a name="relations.querysetproxy"></a>
# relations.querysetproxy
<a name="relations.querysetproxy.QuerysetProxy"></a>
## QuerysetProxy Objects
```python
class QuerysetProxy(Generic[T])
```
Exposes QuerySet methods on relations, but also handles creating and removing
of through Models for m2m relations.
<a name="relations.querysetproxy.QuerysetProxy.queryset"></a>
#### queryset
```python
| @property
| queryset() -> "QuerySet[T]"
```
Returns the queryset if it's set, raises AttributeError otherwise.
**Returns**:
`QuerySet`: QuerySet
<a name="relations.querysetproxy.QuerysetProxy.queryset"></a>
#### queryset
```python
| @queryset.setter
| queryset(value: "QuerySet") -> None
```
Sets the queryset. Initialized in RelationProxy.
**Arguments**:
- `value` (`QuerySet`): QuerySet
<a name="relations.querysetproxy.QuerysetProxy._assign_child_to_parent"></a>
#### \_assign\_child\_to\_parent
```python
| _assign_child_to_parent(child: Optional["T"]) -> None
```
Registers child in parent's RelationManager.
**Arguments**:
- `child` (`Model`): child to register on parent side.
<a name="relations.querysetproxy.QuerysetProxy._register_related"></a>
#### \_register\_related
```python
| _register_related(child: Union["T", Sequence[Optional["T"]]]) -> None
```
Registers child/children in parent's RelationManager.
**Arguments**:
- `child` (`Union[Model,List[Model]]`): child or list of children models to register.
<a name="relations.querysetproxy.QuerysetProxy._clean_items_on_load"></a>
#### \_clean\_items\_on\_load
```python
| _clean_items_on_load() -> None
```
Cleans the current list of the related models.
<a name="relations.querysetproxy.QuerysetProxy.create_through_instance"></a>
#### create\_through\_instance
```python
| async create_through_instance(child: "T", **kwargs: Any) -> None
```
Creates a through model instance in the database for m2m relations.
**Arguments**:
- `kwargs` (`Any`): dict of additional keyword arguments for through instance
- `child` (`Model`): child model instance
<a name="relations.querysetproxy.QuerysetProxy.update_through_instance"></a>
#### update\_through\_instance
```python
| async update_through_instance(child: "T", **kwargs: Any) -> None
```
Updates a through model instance in the database for m2m relations.
**Arguments**:
- `kwargs` (`Any`): dict of additional keyword arguments for through instance
- `child` (`Model`): child model instance
<a name="relations.querysetproxy.QuerysetProxy.upsert_through_instance"></a>
#### upsert\_through\_instance
```python
| async upsert_through_instance(child: "T", **kwargs: Any) -> None
```
Updates a through model instance in the database for m2m relations if
it already exists, else creates one.
**Arguments**:
- `kwargs` (`Any`): dict of additional keyword arguments for through instance
- `child` (`Model`): child model instance
<a name="relations.querysetproxy.QuerysetProxy.delete_through_instance"></a>
#### delete\_through\_instance
```python
| async delete_through_instance(child: "T") -> None
```
Removes through model instance from the database for m2m relations.
**Arguments**:
- `child` (`Model`): child model instance
<a name="relations.querysetproxy.QuerysetProxy.exists"></a>
#### exists
```python
| async exists() -> bool
```
Returns a bool value to confirm if there are rows matching the given criteria
(applied with `filter` and `exclude` if set).
Actual call delegated to QuerySet.
**Returns**:
`bool`: result of the check
<a name="relations.querysetproxy.QuerysetProxy.count"></a>
#### count
```python
| async count(distinct: bool = True) -> int
```
Returns number of rows matching the given criteria
(applied with `filter` and `exclude` if set before).
If `distinct` is `True` (the default), this will return the number of primary rows selected. If `False`,
the count will be the total number of rows returned
(including extra rows for `one-to-many` or `many-to-many` left `select_related` table joins).
`False` is the legacy (buggy) behavior for workflows that depend on it.
Actual call delegated to QuerySet.
**Arguments**:
- `distinct` (`bool`): flag if the primary table rows should be distinct or not
**Returns**:
`int`: number of rows
<a name="relations.querysetproxy.QuerysetProxy.max"></a>
#### max
```python
| async max(columns: Union[str, List[str]]) -> Any
```
Returns max value of columns for rows matching the given criteria
(applied with `filter` and `exclude` if set before).
**Returns**:
`Any`: max value of column(s)
<a name="relations.querysetproxy.QuerysetProxy.min"></a>
#### min
```python
| async min(columns: Union[str, List[str]]) -> Any
```
Returns min value of columns for rows matching the given criteria
(applied with `filter` and `exclude` if set before).
**Returns**:
`Any`: min value of column(s)
<a name="relations.querysetproxy.QuerysetProxy.sum"></a>
#### sum
```python
| async sum(columns: Union[str, List[str]]) -> Any
```
Returns sum value of columns for rows matching the given criteria
(applied with `filter` and `exclude` if set before).
**Returns**:
`int`: sum value of columns
<a name="relations.querysetproxy.QuerysetProxy.avg"></a>
#### avg
```python
| async avg(columns: Union[str, List[str]]) -> Any
```
Returns avg value of columns for rows matching the given criteria
(applied with `filter` and `exclude` if set before).
**Returns**:
`Union[int, float, List]`: avg value of columns
<a name="relations.querysetproxy.QuerysetProxy.clear"></a>
#### clear
```python
| async clear(keep_reversed: bool = True) -> int
```
Removes all related models from given relation.
Removes all through models for m2m relation.
For reverse FK relations keep_reversed flag marks if the reversed models
should be kept or deleted from the database too (False means that models
will be deleted, and not only removed from relation).
**Arguments**:
- `keep_reversed` (`bool`): flag if reverse models in reverse FK should be kept or not; keep_reversed=False deletes them from the database
**Returns**:
`int`: number of deleted models
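A sketch using the hypothetical `Album`/`Track` reverse relation from the QuerySet sketches earlier in this diff.
```python
async def clear_example(album) -> None:
    # album: a saved Album instance from the earlier sketches
    # detach all tracks from the album but keep the rows in the database
    await album.tracks.clear()
    # remove the relation and delete the related rows as well
    await album.tracks.clear(keep_reversed=False)
```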
<a name="relations.querysetproxy.QuerysetProxy.first"></a>
#### first
```python
| async first(*args: Any, **kwargs: Any) -> "T"
```
Gets the first row from the db ordered by primary key column ascending.
Actual call delegated to QuerySet.
Passing args and/or kwargs is a shortcut and equals to calling
`filter(*args, **kwargs).first()`.
List of related models is cleared before the call.
**Arguments**:
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`Model`: returned model
<a name="relations.querysetproxy.QuerysetProxy.get_or_none"></a>
#### get\_or\_none
```python
| async get_or_none(*args: Any, **kwargs: Any) -> Optional["T"]
```
Gets the first row from the db meeting the criteria set by kwargs.
If no criteria are set it will return the last row in db sorted by pk.
Passing args and/or kwargs is a shortcut equivalent to calling
`filter(*args, **kwargs).get_or_none()`.
If no match is found None will be returned.
**Arguments**:
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`Model`: returned model
<a name="relations.querysetproxy.QuerysetProxy.get"></a>
#### get
```python
| async get(*args: Any, **kwargs: Any) -> "T"
```
Gets the first row from the db meeting the criteria set by kwargs.
If no criteria are set it will return the last row in db sorted by pk.
Passing args and/or kwargs is a shortcut equivalent to calling
`filter(*args, **kwargs).get()`.
Actual call delegated to QuerySet.
List of related models is cleared before the call.
**Raises**:
- `NoMatch`: if no rows are returned
- `MultipleMatches`: if more than 1 row is returned.
**Arguments**:
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`Model`: returned model
<a name="relations.querysetproxy.QuerysetProxy.all"></a>
#### all
```python
| async all(*args: Any, **kwargs: Any) -> List["T"]
```
Returns all rows from a database for given model for set filter options.
Passing args and/or kwargs is a shortcut and equals to calling
`filter(*args, **kwargs).all()`.
If there are no rows meeting the criteria an empty list is returned.
Actual call delegated to QuerySet.
List of related models is cleared before the call.
**Arguments**:
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`List[Model]`: list of returned models
<a name="relations.querysetproxy.QuerysetProxy.create"></a>
#### create
```python
| async create(**kwargs: Any) -> "T"
```
Creates the model instance, saves it in a database and returns the updated model
(with pk populated if not passed and autoincrement is set).
The allowed kwargs are `Model` fields names and proper value types.
For m2m relation the through model is created automatically.
Actual call delegated to QuerySet.
**Arguments**:
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`Model`: created model
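A self-contained sketch with hypothetical `Post`/`Category` models (not part of this module), showing the through model being created implicitly by the relation proxy.
```python
import databases
import ormar
import sqlalchemy

database = databases.Database("sqlite:///db.sqlite")
metadata = sqlalchemy.MetaData()


class Category(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=40)


class Post(ormar.Model):
    class Meta:
        database = database
        metadata = metadata

    id: int = ormar.Integer(primary_key=True)
    title: str = ormar.String(max_length=200)
    categories = ormar.ManyToMany(Category)


async def proxy_create_example() -> None:
    post = await Post.objects.create(title="Hello ormar")
    # creates the Category and the through model row in one call
    await post.categories.create(name="News")
    # the proxy exposes the queryset methods documented in this file as well
    assert await post.categories.filter(name__startswith="N").exists()
```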
<a name="relations.querysetproxy.QuerysetProxy.update"></a>
#### update
```python
| async update(each: bool = False, **kwargs: Any) -> int
```
Updates the model table after applying the filters from kwargs.
You have to either pass a filter to narrow down the query or explicitly pass
the each=True flag to affect the whole table.
**Arguments**:
- `each` (`bool`): flag if whole table should be affected if no filter is passed
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`int`: number of updated rows
<a name="relations.querysetproxy.QuerysetProxy.get_or_create"></a>
#### get\_or\_create
```python
| async get_or_create(_defaults: Optional[Dict[str, Any]] = None, *args: Any, **kwargs: Any) -> Tuple["T", bool]
```
Combination of create and get methods.
Tries to get a row meeting the criteria for kwargs
and if `NoMatch` exception is raised
it creates a new one with given kwargs.
**Arguments**:
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`Model`: returned or created Model
<a name="relations.querysetproxy.QuerysetProxy.update_or_create"></a>
#### update\_or\_create
```python
| async update_or_create(**kwargs: Any) -> "T"
```
Updates the model, or in case there is no match in database creates a new one.
Actual call delegated to QuerySet.
**Arguments**:
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`Model`: updated or created model
<a name="relations.querysetproxy.QuerysetProxy.filter"></a>
#### filter
```python
| filter(*args: Any, **kwargs: Any) -> "QuerysetProxy[T]"
```
Allows you to filter by any `Model` attribute/field
as well as to fetch instances, with a filter across an FK relationship.
You can use special filter suffix to change the filter operands:
* exact - like `album__name__exact='Malibu'` (exact match)
* iexact - like `album__name__iexact='malibu'` (exact match case insensitive)
* contains - like `album__name__contains='Mal'` (sql like)
* icontains - like `album__name__icontains='mal'` (sql like case insensitive)
* in - like `album__name__in=['Malibu', 'Barclay']` (sql in)
* isnull - like `album__name__isnull=True` (sql is null)
(isnotnull `album__name__isnull=False` (sql is not null))
* gt - like `position__gt=3` (sql >)
* gte - like `position__gte=3` (sql >=)
* lt - like `position__lt=3` (sql <)
* lte - like `position__lte=3` (sql <=)
* startswith - like `album__name__startswith='Mal'` (exact start match)
* istartswith - like `album__name__istartswith='mal'` (case insensitive)
* endswith - like `album__name__endswith='ibu'` (exact end match)
* iendswith - like `album__name__iendswith='IBU'` (case insensitive)
Actual call delegated to QuerySet.
**Arguments**:
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`QuerysetProxy`: filtered QuerysetProxy
<a name="relations.querysetproxy.QuerysetProxy.exclude"></a>
#### exclude
```python
| exclude(*args: Any, **kwargs: Any) -> "QuerysetProxy[T]"
```
Works exactly the same as filter and all modifiers (suffixes) are the same,
but returns a *not* condition.
So if you use `filter(name='John')` which is `where name = 'John'` in SQL,
the `exclude(name='John')` equals to `where name <> 'John'`
Note that all conditions are joined so if you pass multiple values it
becomes a union of conditions.
`exclude(name='John', age>=35)` will become
`where not (name='John' and age>=35)`
Actual call delegated to QuerySet.
**Arguments**:
- `kwargs` (`Any`): fields names and proper value types
**Returns**:
`QuerysetProxy`: filtered QuerysetProxy
<a name="relations.querysetproxy.QuerysetProxy.select_all"></a>
#### select\_all
```python
| select_all(follow: bool = False) -> "QuerysetProxy[T]"
```
By default adds only directly related models.
If follow=True is set it adds also related models of related models.
To avoid getting stuck in an infinite loop (related models also keep a relation
to the parent model) a set of visited models is kept.
That way already visited models that are nested are loaded, but the load does not
follow them further. So Model A -> Model B -> Model C -> Model A -> Model X
will load the second Model A but will never follow into Model X.
Nested relations of that kind need to be loaded manually.
**Arguments**:
- `follow` (`bool`): flag to include also related models of related models - by default only directly related models are added
**Returns**:
`QuerysetProxy`: QuerysetProxy
<a name="relations.querysetproxy.QuerysetProxy.select_related"></a>
#### select\_related
```python
| select_related(related: Union[List, str]) -> "QuerysetProxy[T]"
```
Allows to prefetch related models during the same query.
**With `select_related` always only one query is run against the database**,
meaning that one (sometimes complicated) join is generated and later nested
models are processed in python.
To fetch related model use `ForeignKey` names.
To chain related `Models` relation use double underscores between names.
Actual call delegated to QuerySet.
**Arguments**:
- `related` (`Union[List, str]`): list of relation field names, can be linked by '__' to nest
**Returns**:
`QuerysetProxy`: QuerysetProxy
<a name="relations.querysetproxy.QuerysetProxy.prefetch_related"></a>
#### prefetch\_related
```python
| prefetch_related(related: Union[List, str]) -> "QuerysetProxy[T]"
```
Allows to prefetch related models during query - but opposite to
`select_related` each subsequent model is fetched in a separate database query.
**With `prefetch_related` always one query per Model is run against the
database**, meaning that you will have multiple queries executed one
after another.
To fetch related model use `ForeignKey` names.
To chain related `Models` relation use double underscores between names.
Actual call delegated to QuerySet.
**Arguments**:
- `related` (`Union[List, str]`): list of relation field names, can be linked by '__' to nest
**Returns**:
`QuerysetProxy`: QuerysetProxy
<a name="relations.querysetproxy.QuerysetProxy.paginate"></a>
#### paginate
```python
| paginate(page: int, page_size: int = 20) -> "QuerysetProxy[T]"
```
You can paginate the result which is a combination of offset and limit clauses.
Limit is set to page size and offset is set to (page-1) * page_size.
Actual call delegated to QuerySet.
**Arguments**:
- `page_size` (`int`): numbers of items per page
- `page` (`int`): page number
**Returns**:
`QuerySet`: QuerySet
<a name="relations.querysetproxy.QuerysetProxy.limit"></a>
#### limit
```python
| limit(limit_count: int) -> "QuerysetProxy[T]"
```
You can limit the results to desired number of parent models.
Actual call delegated to QuerySet.
**Arguments**:
- `limit_count` (`int`): number of models to limit
**Returns**:
`QuerysetProxy`: QuerysetProxy
<a name="relations.querysetproxy.QuerysetProxy.offset"></a>
#### offset
```python
| offset(offset: int) -> "QuerysetProxy[T]"
```
You can also offset the results by desired number of main models.
Actual call delegated to QuerySet.
**Arguments**:
- `offset` (`int`): numbers of models to offset
**Returns**:
`QuerysetProxy`: QuerysetProxy
<a name="relations.querysetproxy.QuerysetProxy.fields"></a>
#### fields
```python
| fields(columns: Union[List, str, Set, Dict]) -> "QuerysetProxy[T]"
```
With `fields()` you can select subset of model columns to limit the data load.
Note that `fields()` and `exclude_fields()` work both for main models
(on normal queries like `get`, `all` etc.)
as well as `select_related` and `prefetch_related`
models (with nested notation).
You can select specified fields by passing a `str, List[str], Set[str] or
dict` with nested definition.
To include related models use notation
`{related_name}__{column}[__{optional_next} etc.]`.
`fields()` can be called several times, building up the columns to select.
If you include related models in a `select_related()` call but do not specify
columns for those models in `fields()`, all fields of
those nested models are implied.
Mandatory fields cannot be excluded as that would raise a `ValidationError`;
to exclude a field it has to be nullable.
Pk column cannot be excluded - it's always auto added even if
not explicitly included.
You can also pass fields to include as a dictionary or set.
To mark a field as included in a dictionary use its name as key
and ellipsis as value.
To traverse nested models use nested dictionaries.
To include fields at last level instead of nested dictionary a set can be used.
To include whole nested model specify model related field name and ellipsis.
Actual call delegated to QuerySet.
**Arguments**:
- `columns` (`Union[List, str, Set, Dict]`): columns to include
**Returns**:
`QuerysetProxy`: QuerysetProxy
<a name="relations.querysetproxy.QuerysetProxy.exclude_fields"></a>
#### exclude\_fields
```python
| exclude_fields(columns: Union[List, str, Set, Dict]) -> "QuerysetProxy[T]"
```
With `exclude_fields()` you can select subset of model columns that will
be excluded to limit the data load.
It's the opposite of `fields()` method so check documentation above
to see what options are available.
Especially check above how you can pass also nested dictionaries
and sets as a mask to exclude fields from whole hierarchy.
Note that `fields()` and `exclude_fields()` work both for main models
(on normal queries like `get`, `all` etc.)
as well as `select_related` and `prefetch_related` models
(with nested notation).
Mandatory fields cannot be excluded as that would raise a `ValidationError`;
to exclude a field it has to be nullable.
Pk column cannot be excluded - it's always auto added even
if explicitly excluded.
Actual call delegated to QuerySet.
**Arguments**:
- `columns` (`Union[List, str, Set, Dict]`): columns to exclude
**Returns**:
`QuerysetProxy`: QuerysetProxy
<a name="relations.querysetproxy.QuerysetProxy.order_by"></a>
#### order\_by
```python
| order_by(columns: Union[List, str, "OrderAction"]) -> "QuerysetProxy[T]"
```
With `order_by()` you can order the results from database based on your
choice of fields.
You can provide a string with field name or list of strings with fields names.
Ordering in sql will be applied in order of names you provide in order_by.
By default, if you do not provide ordering, `ormar` explicitly orders by
all primary keys.
If you are sorting by nested models, which causes the result rows to be
unsorted by the main model, `ormar` will combine those children rows into
one main model.
The main model will never be duplicated in the result.
To order by a main model field just provide a field name.
To sort on nested models separate field names with dunder '__'.
You can sort this way across all relation types -> `ForeignKey`,
reverse virtual FK and `ManyToMany` fields.
To sort in descending order provide a hyphen in front of the field name.
Actual call delegated to QuerySet.
**Arguments**:
- `columns` (`Union[List, str]`): columns by which models should be sorted
**Returns**:
`QuerysetProxy`: QuerysetProxy

View File

@ -1,150 +0,0 @@
<a name="relations.relation_manager"></a>
# relations.relation\_manager
<a name="relations.relation_manager.RelationsManager"></a>
## RelationsManager Objects
```python
class RelationsManager()
```
Manages relations on a Model; each Model has its own instance.
<a name="relations.relation_manager.RelationsManager.__contains__"></a>
#### \_\_contains\_\_
```python
| __contains__(item: str) -> bool
```
Checks if relation with given name is already registered.
**Arguments**:
- `item` (`str`): name of attribute
**Returns**:
`bool`: result of the check
<a name="relations.relation_manager.RelationsManager.get"></a>
#### get
```python
| get(name: str) -> Optional[Union["Model", Sequence["Model"]]]
```
Returns the related model/models if relation is set.
Actual call is delegated to Relation instance registered under relation name.
**Arguments**:
- `name` (`str`): name of the relation
**Returns**:
`Optional[Union[Model, List[Model]]]`: related model or list of related models if set
<a name="relations.relation_manager.RelationsManager.add"></a>
#### add
```python
| @staticmethod
| add(parent: "Model", child: "Model", field: "ForeignKeyField") -> None
```
Adds the relation on both sides -> meaning on both child and parent models.
One side of the relation is always a weakref proxy to avoid circular refs.
Based on the side from which the relation is added and the relation name, the actual
names of the parent and child relations are established. The related models are
registered on both ends.
**Arguments**:
- `parent` (`Model`): parent model on which relation should be registered
- `child` (`Model`): child model to register
- `field` (`ForeignKeyField`): field with relation definition
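As a rough illustration of the "registered on both ends" behaviour - assuming hypothetical `Author` and `Book` models with a `ForeignKey` from `Book` to `Author` and the default reverse name `books`; the exact in-memory semantics are an assumption here:
```python
book.author = author        # assignment triggers RelationsManager.add under the hood
# the child side now resolves the parent...
assert book.author == author
# ...and the parent side should already list the child, even before saving
assert book in author.books
```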
<a name="relations.relation_manager.RelationsManager.remove"></a>
#### remove
```python
| remove(name: str, child: Union["NewBaseModel", Type["NewBaseModel"]]) -> None
```
Removes given child from relation with given name.
Since you can have many relations between two models you need to pass a name
of relation from which you want to remove the child.
**Arguments**:
- `name` (`str`): name of the relation
- `child` (`Union[Model, Type[Model]]`): child to remove from relation
<a name="relations.relation_manager.RelationsManager.remove_parent"></a>
#### remove\_parent
```python
| @staticmethod
| remove_parent(item: Union["NewBaseModel", Type["NewBaseModel"]], parent: "Model", name: str) -> None
```
Removes given parent from relation with given name.
Since you can have many relations between two models you need to pass a name
of relation from which you want to remove the parent.
**Arguments**:
- `item` (`Union[Model, Type[Model]]`): model with parent registered
- `parent` (`Model`): parent Model
- `name` (`str`): name of the relation
<a name="relations.relation_manager.RelationsManager._get"></a>
#### \_get
```python
| _get(name: str) -> Optional[Relation]
```
Returns the actual relation and not the related model(s).
**Arguments**:
- `name` (`str`): name of the relation
**Returns**:
`ormar.relations.relation.Relation`: Relation instance
<a name="relations.relation_manager.RelationsManager._get_relation_type"></a>
#### \_get\_relation\_type
```python
| _get_relation_type(field: "BaseField") -> RelationType
```
Returns type of the relation declared on a field.
**Arguments**:
- `field` (`BaseField`): field with relation declaration
**Returns**:
`RelationType`: type of the relation defined on field
<a name="relations.relation_manager.RelationsManager._add_relation"></a>
#### \_add\_relation
```python
| _add_relation(field: "BaseField") -> None
```
Registers relation in the manager.
Adds Relation instance under field.name.
**Arguments**:
- `field` (`BaseField`): field with relation declaration

View File

@ -1,145 +0,0 @@
<a name="relations.relation_proxy"></a>
# relations.relation\_proxy
<a name="relations.relation_proxy.RelationProxy"></a>
## RelationProxy Objects
```python
class RelationProxy(Generic[T], list)
```
Proxy of the Relation that is a list with special methods.
<a name="relations.relation_proxy.RelationProxy.related_field_name"></a>
#### related\_field\_name
```python
| @property
| related_field_name() -> str
```
On first access calculates the name of the related field, later stored in
_related_field_name property.
**Returns**:
`str`: name of the related field
<a name="relations.relation_proxy.RelationProxy.__getattribute__"></a>
#### \_\_getattribute\_\_
```python
| __getattribute__(item: str) -> Any
```
Since some QuerySetProxy methods overwrite builtin list methods, we
catch calls to them and delegate them to QuerySetProxy instead.
**Arguments**:
- `item` (`str`): name of attribute
**Returns**:
`Any`: value of attribute
<a name="relations.relation_proxy.RelationProxy.__getattr__"></a>
#### \_\_getattr\_\_
```python
| __getattr__(item: str) -> Any
```
Delegates calls for non existing attributes to QuerySetProxy.
**Arguments**:
- `item` (`str`): name of attribute/method
**Returns**:
`method`: method from QuerySetProxy if exists
<a name="relations.relation_proxy.RelationProxy._initialize_queryset"></a>
#### \_initialize\_queryset
```python
| _initialize_queryset() -> None
```
Initializes the QuerySetProxy if not yet initialized.
<a name="relations.relation_proxy.RelationProxy._check_if_queryset_is_initialized"></a>
#### \_check\_if\_queryset\_is\_initialized
```python
| _check_if_queryset_is_initialized() -> bool
```
Checks if the QuerySetProxy is already set and ready.
**Returns**:
`bool`: result of the check
<a name="relations.relation_proxy.RelationProxy._check_if_model_saved"></a>
#### \_check\_if\_model\_saved
```python
| _check_if_model_saved() -> None
```
Verifies if the parent model of the relation has been already saved.
Otherwise QuerySetProxy cannot filter by parent primary key.
<a name="relations.relation_proxy.RelationProxy._set_queryset"></a>
#### \_set\_queryset
```python
| _set_queryset() -> "QuerySet[T]"
```
Creates a new QuerySet with the relation model and pre-filters it with the current
parent model's primary key, so all queries by definition are already related
to the parent model only, without the need for the user to filter them.
**Returns**:
`QuerySet`: initialized QuerySet
<a name="relations.relation_proxy.RelationProxy.remove"></a>
#### remove
```python
| async remove(item: "T", keep_reversed: bool = True) -> None
```
Removes the related model from the relation with the parent.
Through models are automatically deleted for m2m relations.
For reverse FK relations the keep_reversed flag marks whether the reversed models
should be kept or also deleted from the database (False means that the models
will be deleted, and not only removed from the relation).
**Arguments**:
- `item` (`Model`): child to remove from relation
- `keep_reversed` (`bool`): flag if the reversed model should be kept or deleted too
<a name="relations.relation_proxy.RelationProxy.add"></a>
#### add
```python
| async add(item: "T", **kwargs: Any) -> None
```
Adds child model to relation.
For ManyToMany relations through instance is automatically created.
**Arguments**:
- `kwargs` (`Any`): dict of additional keyword arguments for through instance
- `item` (`Model`): child to add to relation

View File

@ -1,112 +0,0 @@
<a name="relations.relation"></a>
# relations.relation
<a name="relations.relation.RelationType"></a>
## RelationType Objects
```python
class RelationType(Enum)
```
Different types of relations supported by ormar:
* ForeignKey = PRIMARY
* reverse ForeignKey = REVERSE
* ManyToMany = MULTIPLE
<a name="relations.relation.Relation"></a>
## Relation Objects
```python
class Relation(Generic[T])
```
Keeps related Models and handles adding/removing of the children.
<a name="relations.relation.Relation.__init__"></a>
#### \_\_init\_\_
```python
| __init__(manager: "RelationsManager", type_: RelationType, field_name: str, to: Type["T"], through: Type["Model"] = None) -> None
```
Initialize the Relation and keep the related models either as instances of
passed Model, or as a RelationProxy which is basically a list of models with
some special behavior, as it exposes QuerySetProxy and allows querying the
related models already pre filtered by parent model.
**Arguments**:
- `manager` (`RelationsManager`): reference to relation manager
- `type_` (`RelationType`): type of the relation
- `field_name` (`str`): name of the relation field
- `to` (`Type[Model]`): model to which relation leads to
- `through` (`Type[Model]`): model through which relation goes for m2m relations
<a name="relations.relation.Relation._clean_related"></a>
#### \_clean\_related
```python
| _clean_related() -> None
```
Removes dead weakrefs from RelationProxy.
<a name="relations.relation.Relation._find_existing"></a>
#### \_find\_existing
```python
| _find_existing(child: Union["NewBaseModel", Type["NewBaseModel"]]) -> Optional[int]
```
Find child model in RelationProxy if exists.
**Arguments**:
- `child` (`Model`): child model to find
**Returns**:
`Optional[int]`: index of child in RelationProxy
<a name="relations.relation.Relation.add"></a>
#### add
```python
| add(child: "Model") -> None
```
Adds child Model to relation, either sets child as related model or adds
it to the list in RelationProxy depending on relation type.
**Arguments**:
- `child` (`Model`): model to add to relation
<a name="relations.relation.Relation.remove"></a>
#### remove
```python
| remove(child: Union["NewBaseModel", Type["NewBaseModel"]]) -> None
```
Removes child Model from relation, either sets None as related model or removes
it from the list in RelationProxy depending on relation type.
**Arguments**:
- `child` (`Model`): model to remove from relation
<a name="relations.relation.Relation.get"></a>
#### get
```python
| get() -> Optional[Union[List["Model"], "Model"]]
```
Return the related model or models from RelationProxy.
**Returns**:
`Optional[Union[List[Model], Model]]`: related model/models if set

View File

@ -1,23 +0,0 @@
<a name="relations.utils"></a>
# relations.utils
<a name="relations.utils.get_relations_sides_and_names"></a>
#### get\_relations\_sides\_and\_names
```python
get_relations_sides_and_names(to_field: ForeignKeyField, parent: "Model", child: "Model") -> Tuple["Model", "Model", str, str]
```
Determines the names of the child and parent relations, as well as
changes one of the sides of the relation into a weakref.proxy to the model.
**Arguments**:
- `to_field` (`ForeignKeyField`): field with relation definition
- `parent` (`Model`): parent model
- `child` (`Model`): child model
**Returns**:
`Tuple["Model", "Model", str, str]`: parent, child, child_name, to_name

View File

@ -1,202 +0,0 @@
<a name="decorators.signals"></a>
# decorators.signals
<a name="decorators.signals.receiver"></a>
#### receiver
```python
receiver(signal: str, senders: Union[Type["Model"], List[Type["Model"]]]) -> Callable
```
Connect given function to all senders for given signal name.
**Arguments**:
- `signal` (`str`): name of the signal to register to
- `senders` (`Union[Type["Model"], List[Type["Model"]]]`): one or a list of "Model" classes
  that should have the signal receiver registered
**Returns**:
`Callable`: returns the original function untouched
<a name="decorators.signals.post_save"></a>
#### post\_save
```python
post_save(senders: Union[Type["Model"], List[Type["Model"]]]) -> Callable
```
Connect given function to all senders for post_save signal.
**Arguments**:
- `senders` (`Union[Type["Model"], List[Type["Model"]]]`): one or a list of "Model" classes
  that should have the signal receiver registered
**Returns**:
`Callable`: returns the original function untouched
<a name="decorators.signals.post_update"></a>
#### post\_update
```python
post_update(senders: Union[Type["Model"], List[Type["Model"]]]) -> Callable
```
Connect given function to all senders for post_update signal.
**Arguments**:
- `senders` (`Union[Type["Model"], List[Type["Model"]]]`): one or a list of "Model" classes
  that should have the signal receiver registered
**Returns**:
`Callable`: returns the original function untouched
<a name="decorators.signals.post_delete"></a>
#### post\_delete
```python
post_delete(senders: Union[Type["Model"], List[Type["Model"]]]) -> Callable
```
Connect given function to all senders for post_delete signal.
**Arguments**:
- `senders` (`Union[Type["Model"], List[Type["Model"]]]`): one or a list of "Model" classes
  that should have the signal receiver registered
**Returns**:
`Callable`: returns the original function untouched
<a name="decorators.signals.pre_save"></a>
#### pre\_save
```python
pre_save(senders: Union[Type["Model"], List[Type["Model"]]]) -> Callable
```
Connect given function to all senders for pre_save signal.
**Arguments**:
- `senders` (`Union[Type["Model"], List[Type["Model"]]]`): one or a list of "Model" classes
  that should have the signal receiver registered
**Returns**:
`Callable`: returns the original function untouched
<a name="decorators.signals.pre_update"></a>
#### pre\_update
```python
pre_update(senders: Union[Type["Model"], List[Type["Model"]]]) -> Callable
```
Connect given function to all senders for pre_update signal.
**Arguments**:
- `senders` (`Union[Type["Model"], List[Type["Model"]]]`): one or a list of "Model" classes
  that should have the signal receiver registered
**Returns**:
`Callable`: returns the original function untouched
<a name="decorators.signals.pre_delete"></a>
#### pre\_delete
```python
pre_delete(senders: Union[Type["Model"], List[Type["Model"]]]) -> Callable
```
Connect given function to all senders for pre_delete signal.
**Arguments**:
- `senders` (`Union[Type["Model"], List[Type["Model"]]]`): one or a list of "Model" classes
  that should have the signal receiver registered
**Returns**:
`Callable`: returns the original function untouched
<a name="decorators.signals.pre_relation_add"></a>
#### pre\_relation\_add
```python
pre_relation_add(senders: Union[Type["Model"], List[Type["Model"]]]) -> Callable
```
Connect given function to all senders for pre_relation_add signal.
**Arguments**:
- `senders` (`Union[Type["Model"], List[Type["Model"]]]`): one or a list of "Model" classes
  that should have the signal receiver registered
**Returns**:
`Callable`: returns the original function untouched
<a name="decorators.signals.post_relation_add"></a>
#### post\_relation\_add
```python
post_relation_add(senders: Union[Type["Model"], List[Type["Model"]]]) -> Callable
```
Connect given function to all senders for post_relation_add signal.
**Arguments**:
- `senders` (`Union[Type["Model"], List[Type["Model"]]]`): one or a list of "Model" classes
  that should have the signal receiver registered
**Returns**:
`Callable`: returns the original function untouched
<a name="decorators.signals.pre_relation_remove"></a>
#### pre\_relation\_remove
```python
pre_relation_remove(senders: Union[Type["Model"], List[Type["Model"]]]) -> Callable
```
Connect given function to all senders for pre_relation_remove signal.
**Arguments**:
- `senders` (`Union[Type["Model"], List[Type["Model"]]]`): one or a list of "Model" classes
  that should have the signal receiver registered
**Returns**:
`Callable`: returns the original function untouched
<a name="decorators.signals.post_relation_remove"></a>
#### post\_relation\_remove
```python
post_relation_remove(senders: Union[Type["Model"], List[Type["Model"]]]) -> Callable
```
Connect given function to all senders for post_relation_remove signal.
**Arguments**:
- `senders` (`Union[Type["Model"], List[Type["Model"]]]`): one or a list of "Model" classes
  that should have the signal receiver registered
**Returns**:
`Callable`: returns the original function untouched
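A hedged usage sketch (hypothetical `Album` model; the receiver must accept `**kwargs`, and `sender`/`instance` are shown here as the kwargs typically passed by the built-in signals):
```python
@post_save(Album)
async def after_save(sender, instance, **kwargs):
    print(f"saved {instance.pk}")

# The generic receiver() decorator covers the same ground for any signal name.
@receiver("pre_delete", senders=[Album])
async def before_delete(sender, instance, **kwargs):
    print(f"about to delete {instance.pk}")
```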

View File

@ -1,106 +0,0 @@
<a name="signals.signal"></a>
# signals.signal
<a name="signals.signal.callable_accepts_kwargs"></a>
#### callable\_accepts\_kwargs
```python
callable_accepts_kwargs(func: Callable) -> bool
```
Checks if function accepts **kwargs.
**Arguments**:
- `func` (`function`): function which signature needs to be checked
**Returns**:
`bool`: result of the check
<a name="signals.signal.make_id"></a>
#### make\_id
```python
make_id(target: Any) -> Union[int, Tuple[int, int]]
```
Creates an id of a function or method to be used as a key to store the signal receiver.
**Arguments**:
- `target` (`Any`): target which id we want
**Returns**:
`int`: id of the target
<a name="signals.signal.Signal"></a>
## Signal Objects
```python
class Signal()
```
Signal that notifies all receiver functions.
In ormar used by models to send pre_save, post_save etc. signals.
<a name="signals.signal.Signal.connect"></a>
#### connect
```python
| connect(receiver: Callable) -> None
```
Connects given receiver function to the signal.
**Raises**:
- `SignalDefinitionError`: if the receiver is not callable
  or does not accept **kwargs
**Arguments**:
- `receiver` (`Callable`): receiver function
<a name="signals.signal.Signal.disconnect"></a>
#### disconnect
```python
| disconnect(receiver: Callable) -> bool
```
Removes the receiver function from the signal.
**Arguments**:
- `receiver` (`Callable`): receiver function
**Returns**:
`bool`: flag if receiver was removed
<a name="signals.signal.Signal.send"></a>
#### send
```python
| async send(sender: Type["Model"], **kwargs: Any) -> None
```
Notifies all receiver functions with given kwargs
**Arguments**:
- `sender` (`Type["Model"]`): model that sends the signal
- `kwargs` (`Any`): arguments passed to receivers
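A minimal sketch of manual wiring - assuming an async receiver that accepts `**kwargs` (required by `connect()`) and a hypothetical `Album` model as the sender:
```python
signal = Signal()

async def on_event(sender, **kwargs):
    print(sender, kwargs)

signal.connect(on_event)
await signal.send(sender=Album, instance=album)   # notifies every connected receiver
signal.disconnect(on_event)
```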
<a name="signals.signal.SignalEmitter"></a>
## SignalEmitter Objects
```python
class SignalEmitter()
```
Emitter that registers the signals in internal dictionary.
If signal with given name does not exist it's auto added on access.

32
docs/gen_ref_pages.py Normal file
View File

@ -0,0 +1,32 @@
"""Generate the code reference pages and navigation."""
from pathlib import Path
import mkdocs_gen_files
nav = mkdocs_gen_files.Nav()
for path in sorted(Path("ormar").rglob("*.py")):
module_path = path.relative_to(".").with_suffix("")
doc_path = path.relative_to("ormar").with_suffix(".md")
full_doc_path = Path("api", doc_path)
parts = tuple(module_path.parts)
if parts[-1] == "__init__":
parts = parts[:-1]
doc_path = doc_path.with_name("index.md")
full_doc_path = full_doc_path.with_name("index.md")
elif parts[-1] == "__main__":
continue
nav[parts] = str(doc_path)
with mkdocs_gen_files.open(full_doc_path, "w") as fd:
ident = ".".join(parts)
fd.write(f"::: {ident}")
mkdocs_gen_files.set_edit_path(full_doc_path, path)
with mkdocs_gen_files.open("api/SUMMARY.md", "w") as nav_file:
nav_file.writelines(nav.build_literate_nav())
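For context: when `mkdocs build` runs the `gen-files` plugin, each module ends up as a one-line mkdocstrings stub. As an illustrative example (the exact file set depends on the package layout), `ormar/exceptions.py` would yield a virtual `api/exceptions.md` containing:
```md
::: ormar.exceptions
```
while `api/SUMMARY.md` collects the corresponding navigation entries in the literate-nav format consumed by the `literate-nav` plugin configured below.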

View File

@ -326,3 +326,5 @@ objects from other side of the relation.
!!!tip !!!tip
To read more about `QuerysetProxy` visit [querysetproxy][querysetproxy] section To read more about `QuerysetProxy` visit [querysetproxy][querysetproxy] section
[querysetproxy]: ../relations/queryset-proxy.md

View File

@ -357,3 +357,5 @@ you to query or create related objects from other side of the relation.
!!!tip !!!tip
To read more about `QuerysetProxy` visit [querysetproxy][querysetproxy] section To read more about `QuerysetProxy` visit [querysetproxy][querysetproxy] section
[querysetproxy]: ../relations/queryset-proxy.md

View File

@ -42,67 +42,10 @@ nav:
- PyCharm plugin: plugin.md - PyCharm plugin: plugin.md
- Contributing: contributing.md - Contributing: contributing.md
- Release Notes: releases.md - Release Notes: releases.md
- Api (BETA): - Api (BETA): api/
- Index: api/index.md
- Models:
- Helpers:
- api/models/descriptors/descriptors.md
- Helpers:
- api/models/helpers/models.md
- api/models/helpers/pydantic.md
- api/models/helpers/relations.md
- api/models/helpers/sqlalchemy.md
- api/models/helpers/validation.md
- api/models/helpers/related-names-validation.md
- Mixins:
- Alias Mixin: api/models/mixins/alias-mixin.md
- Excludable Mixin: api/models/mixins/excludable-mixin.md
- Merge Model Mixin: api/models/mixins/merge-model-mixin.md
- Prefetch Query Mixin: api/models/mixins/prefetch-query-mixin.md
- Relation Mixin: api/models/mixins/relation-mixin.md
- Save Prepare Mixin: api/models/mixins/save-prepare-mixin.md
- api/models/model.md
- Model Row: api/models/model-row.md
- New BaseModel: api/models/new-basemodel.md
- Model Table Proxy: api/models/model-table-proxy.md
- Model Metaclass: api/models/model-metaclass.md
- Excludable Items: api/models/excludable-items.md
- Traversible: api/models/traversible.md
- Fields:
- Base Field: api/fields/base-field.md
- Model Fields: api/fields/model-fields.md
- Foreign Key: api/fields/foreign-key.md
- Many To Many: api/fields/many-to-many.md
- api/fields/decorators.md
- Query Set:
- Query Set: api/query-set/query-set.md
- api/query-set/query.md
- Prefetch Query: api/query-set/prefetch-query.md
- api/query-set/join.md
- api/query-set/clause.md
- Filter Query: api/query-set/filter-query.md
- Order Query: api/query-set/order-query.md
- Limit Query: api/query-set/limit-query.md
- Offset Query: api/query-set/offset-query.md
- Field Accessor: api/query-set/field-accessor.md
- Reverse Alias Resolver: api/query-set/reverse-alias-resolver.md
- api/query-set/utils.md
- Relations:
- Relation Manager: api/relations/relation-manager.md
- api/relations/relation.md
- Relation Proxy: api/relations/relation-proxy.md
- Queryset Proxy: api/relations/queryset-proxy.md
- Alias Manager: api/relations/alias-manager.md
- api/relations/utils.md
- Signals:
- api/signals/signal.md
- api/signals/decorators.md
- Exceptions: api/exceptions.md
repo_name: collerek/ormar repo_name: collerek/ormar
repo_url: https://github.com/collerek/ormar repo_url: https://github.com/collerek/ormar
google_analytics:
- UA-72514911-3
- auto
theme: theme:
name: material name: material
highlightjs: true highlightjs: true
@ -110,6 +53,8 @@ theme:
- python - python
palette: palette:
primary: indigo primary: indigo
analytics:
gtag: G-ZJWZYM5DNM
markdown_extensions: markdown_extensions:
- admonition - admonition
- pymdownx.superfences - pymdownx.superfences
@ -118,8 +63,29 @@ markdown_extensions:
- pymdownx.inlinehilite - pymdownx.inlinehilite
- pymdownx.highlight: - pymdownx.highlight:
linenums: true linenums: true
plugins:
- search
- gen-files:
scripts:
- docs/gen_ref_pages.py
- literate-nav:
nav_file: SUMMARY.md
- section-index
- mkdocstrings:
watch:
- ormar
handlers:
python:
selection:
docstring_style: sphinx
rendering:
show_submodules: no
extra:
analytics:
provider: google
property: UA-72514911-3
extra_javascript: extra_javascript:
- https://cdnjs.cloudflare.com/ajax/libs/highlight.js/10.1.1/highlight.min.js - https://cdnjs.cloudflare.com/ajax/libs/highlight.js/10.1.1/highlight.min.js
- javascripts/config.js - javascripts/config.js
extra_css: extra_css:
- https://cdnjs.cloudflare.com/ajax/libs/highlight.js/10.1.1/styles/default.min.css - https://cdnjs.cloudflare.com/ajax/libs/highlight.js/10.1.1/styles/default.min.css

520
poetry.lock generated
View File

@ -67,6 +67,17 @@ python-versions = ">=3.6.1"
[package.extras] [package.extras]
typed = ["typed-ast"] typed = ["typed-ast"]
[[package]]
name = "astunparse"
version = "1.6.3"
description = "An AST unparser for Python"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = ">=1.6.1,<2.0"
[[package]] [[package]]
name = "async-timeout" name = "async-timeout"
version = "4.0.2" version = "4.0.2"
@ -158,6 +169,14 @@ d = ["aiohttp (>=3.7.4)"]
jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"] jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
uvloop = ["uvloop (>=0.15.2)"] uvloop = ["uvloop (>=0.15.2)"]
[[package]]
name = "cached-property"
version = "1.5.2"
description = "A decorator for caching properties in classes."
category = "dev"
optional = false
python-versions = "*"
[[package]] [[package]]
name = "certifi" name = "certifi"
version = "2021.10.8" version = "2021.10.8"
@ -287,44 +306,6 @@ postgresql = ["asyncpg"]
postgresql_aiopg = ["aiopg"] postgresql_aiopg = ["aiopg"]
sqlite = ["aiosqlite"] sqlite = ["aiosqlite"]
[[package]]
name = "databind"
version = "1.5.2"
description = "Databind is a library inspired by jackson-databind to de-/serialize Python dataclasses. The `databind` package will install the full suite of databind packages. Compatible with Python 3.7 and newer."
category = "dev"
optional = false
python-versions = ">=3.7,<4.0"
[package.dependencies]
"databind.core" = ">=1.5.2,<2.0.0"
"databind.json" = ">=1.5.2,<2.0.0"
[[package]]
name = "databind.core"
version = "1.5.2"
description = "Databind is a library inspired by jackson-databind to de-/serialize Python dataclasses. Compatible with Python 3.7 and newer."
category = "dev"
optional = false
python-versions = ">=3.7,<4.0"
[package.dependencies]
Deprecated = ">=1.2.12,<2.0.0"
"nr.util" = ">=0.8.3,<1.0.0"
typing-extensions = ">=3.10.0,<4.0.0"
[[package]]
name = "databind.json"
version = "1.5.2"
description = "De-/serialize Python dataclasses to or from JSON payloads. Compatible with Python 3.7 and newer."
category = "dev"
optional = false
python-versions = ">=3.7,<4.0"
[package.dependencies]
"databind.core" = ">=1.5.2,<2.0.0"
"nr.util" = ">=0.8.3,<1.0.0"
typing-extensions = ">=3.10.0,<4.0.0"
[[package]] [[package]]
name = "dataclasses" name = "dataclasses"
version = "0.6" version = "0.6"
@ -333,20 +314,6 @@ category = "dev"
optional = false optional = false
python-versions = "*" python-versions = "*"
[[package]]
name = "deprecated"
version = "1.2.13"
description = "Python @deprecated decorator to deprecate old python classes, functions or methods."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
wrapt = ">=1.10,<2"
[package.extras]
dev = ["tox", "bump2version (<1)", "sphinx (<2)", "importlib-metadata (<3)", "importlib-resources (<4)", "configparser (<5)", "sphinxcontrib-websupport (<2)", "zipp (<2)", "PyTest (<5)", "PyTest-Cov (<2.6)", "pytest", "pytest-cov"]
[[package]] [[package]]
name = "distlib" name = "distlib"
version = "0.3.4" version = "0.3.4"
@ -355,41 +322,6 @@ category = "dev"
optional = false optional = false
python-versions = "*" python-versions = "*"
[[package]]
name = "docspec"
version = "2.0.1"
description = "Docspec is a JSON object specification for representing API documentation of programming languages."
category = "dev"
optional = false
python-versions = ">=3.7,<4.0"
[package.dependencies]
databind = ">=1.5.0,<2.0.0"
Deprecated = ">=1.2.12,<2.0.0"
[[package]]
name = "docspec-python"
version = "2.0.1"
description = "A parser based on lib2to3 producing docspec data from Python source code."
category = "dev"
optional = false
python-versions = ">=3.7,<4.0"
[package.dependencies]
docspec = ">=2.0.1,<3.0.0"
"nr.util" = ">=0.7.0"
[[package]]
name = "docstring-parser"
version = "0.11"
description = "\"Parse Python docstrings in reST, Google and Numpydoc format\""
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
test = ["pytest", "black"]
[[package]] [[package]]
name = "fastapi" name = "fastapi"
version = "0.75.2" version = "0.75.2"
@ -602,6 +534,20 @@ python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*"
[package.extras] [package.extras]
docs = ["sphinx"] docs = ["sphinx"]
[[package]]
name = "griffe"
version = "0.19.0"
description = "Signatures for entire Python programs. Extract the structure, the frame, the skeleton of your project, to generate API documentation or find breaking changes in your API."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
cached_property = {version = "*", markers = "python_version < \"3.8\""}
[package.extras]
async = ["aiofiles (>=0.7,<1.0)"]
[[package]] [[package]]
name = "identify" name = "identify"
version = "2.5.0" version = "2.5.0"
@ -662,7 +608,7 @@ i18n = ["Babel (>=2.7)"]
[[package]] [[package]]
name = "markdown" name = "markdown"
version = "3.3.6" version = "3.3.7"
description = "Python implementation of Markdown." description = "Python implementation of Markdown."
category = "dev" category = "dev"
optional = false optional = false
@ -721,6 +667,40 @@ watchdog = ">=2.0"
[package.extras] [package.extras]
i18n = ["babel (>=2.9.0)"] i18n = ["babel (>=2.9.0)"]
[[package]]
name = "mkdocs-autorefs"
version = "0.4.1"
description = "Automatically link across pages in MkDocs."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
Markdown = ">=3.3"
mkdocs = ">=1.1"
[[package]]
name = "mkdocs-gen-files"
version = "0.3.4"
description = "MkDocs plugin to programmatically generate documentation pages during the build"
category = "dev"
optional = false
python-versions = ">=3.7,<4.0"
[package.dependencies]
mkdocs = ">=1.0.3,<2.0.0"
[[package]]
name = "mkdocs-literate-nav"
version = "0.4.1"
description = "MkDocs plugin to specify the navigation in Markdown instead of YAML"
category = "dev"
optional = false
python-versions = ">=3.6,<4.0"
[package.dependencies]
mkdocs = ">=1.0.3,<2.0.0"
[[package]] [[package]]
name = "mkdocs-material" name = "mkdocs-material"
version = "8.2.13" version = "8.2.13"
@ -745,6 +725,64 @@ category = "dev"
optional = false optional = false
python-versions = ">=3.6" python-versions = ">=3.6"
[[package]]
name = "mkdocs-section-index"
version = "0.3.4"
description = "MkDocs plugin to allow clickable sections that lead to an index page"
category = "dev"
optional = false
python-versions = ">=3.6,<4.0"
[package.dependencies]
mkdocs = ">=1.1,<2.0"
[[package]]
name = "mkdocstrings"
version = "0.18.0"
description = "Automatic documentation from sources, for MkDocs."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
Jinja2 = ">=2.11.1"
Markdown = ">=3.3"
MarkupSafe = ">=1.1"
mkdocs = ">=1.2"
mkdocs-autorefs = ">=0.3.1"
mkdocstrings-python = {version = ">=0.5.2", optional = true, markers = "extra == \"python\""}
mkdocstrings-python-legacy = ">=0.2"
pymdown-extensions = ">=6.3"
[package.extras]
crystal = ["mkdocstrings-crystal (>=0.3.4)"]
python = ["mkdocstrings-python (>=0.5.2)"]
python-legacy = ["mkdocstrings-python-legacy (>=0.2.1)"]
[[package]]
name = "mkdocstrings-python"
version = "0.6.6"
description = "A Python handler for mkdocstrings."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
griffe = ">=0.11.1"
mkdocstrings = ">=0.18"
[[package]]
name = "mkdocstrings-python-legacy"
version = "0.2.2"
description = "A legacy Python handler for mkdocstrings."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
mkdocstrings = ">=0.18"
pytkdocs = ">=0.14"
[[package]] [[package]]
name = "mr-proper" name = "mr-proper"
version = "0.0.7" version = "0.0.7"
@ -801,18 +839,6 @@ category = "dev"
optional = false optional = false
python-versions = "*" python-versions = "*"
[[package]]
name = "nr.util"
version = "0.8.11"
description = "General purpose Python utility library."
category = "dev"
optional = false
python-versions = ">=3.7,<4.0"
[package.dependencies]
deprecated = ">=1.2.0,<2.0.0"
typing-extensions = ">=3.0.0"
[[package]] [[package]]
name = "orjson" name = "orjson"
version = "3.6.8" version = "3.6.8"
@ -842,7 +868,7 @@ python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[[package]] [[package]]
name = "pbr" name = "pbr"
version = "5.8.1" version = "5.9.0"
description = "Python Build Reasonableness" description = "Python Build Reasonableness"
category = "dev" category = "dev"
optional = false optional = false
@ -939,29 +965,6 @@ typing-extensions = ">=3.7.4.3"
dotenv = ["python-dotenv (>=0.10.4)"] dotenv = ["python-dotenv (>=0.10.4)"]
email = ["email-validator (>=1.0.3)"] email = ["email-validator (>=1.0.3)"]
[[package]]
name = "pydoc-markdown"
version = "4.6.3"
description = "Create Python API documentation in Markdown format."
category = "dev"
optional = false
python-versions = ">=3.7,<4.0"
[package.dependencies]
click = ">=7.1,<9.0"
databind = ">=1.5.0,<2.0.0"
docspec = ">=2.0.0a1,<3.0.0"
docspec-python = ">=2.0.0a1,<3.0.0"
docstring-parser = ">=0.11,<0.12"
jinja2 = ">=3.0.0,<4.0.0"
"nr.util" = ">=0.7.5,<1.0.0"
PyYAML = ">=5.3,<6.0"
requests = ">=2.23.0,<3.0.0"
tomli = ">=2.0.0,<3.0.0"
tomli_w = ">=1.0.0,<2.0.0"
watchdog = "*"
yapf = ">=0.30.0"
[[package]] [[package]]
name = "pyflakes" name = "pyflakes"
version = "2.3.1" version = "2.3.1"
@ -1074,13 +1077,29 @@ python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
[package.dependencies] [package.dependencies]
six = ">=1.5" six = ">=1.5"
[[package]]
name = "pytkdocs"
version = "0.16.1"
description = "Load Python objects documentation."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
astunparse = {version = ">=1.6", markers = "python_version < \"3.9\""}
cached-property = {version = ">=1.5", markers = "python_version < \"3.8\""}
typing-extensions = {version = ">=3.7", markers = "python_version < \"3.8\""}
[package.extras]
numpy-style = ["docstring_parser (>=0.7)"]
[[package]] [[package]]
name = "pyyaml" name = "pyyaml"
version = "5.4.1" version = "6.0"
description = "YAML parser and emitter for Python" description = "YAML parser and emitter for Python"
category = "dev" category = "dev"
optional = false optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*" python-versions = ">=3.6"
[[package]] [[package]]
name = "pyyaml-env-tag" name = "pyyaml-env-tag"
@ -1222,14 +1241,6 @@ category = "dev"
optional = false optional = false
python-versions = ">=3.7" python-versions = ">=3.7"
[[package]]
name = "tomli-w"
version = "1.0.0"
description = "A lil' TOML writer"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]] [[package]]
name = "typed-ast" name = "typed-ast"
version = "1.5.3" version = "1.5.3"
@ -1380,22 +1391,6 @@ python-versions = ">=3.6"
[package.extras] [package.extras]
watchmedo = ["PyYAML (>=3.10)"] watchmedo = ["PyYAML (>=3.10)"]
[[package]]
name = "wrapt"
version = "1.14.1"
description = "Module for decorators, wrappers and monkey patching."
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[[package]]
name = "yapf"
version = "0.32.0"
description = "A formatter for Python code."
category = "dev"
optional = false
python-versions = "*"
[[package]] [[package]]
name = "yappi" name = "yappi"
version = "1.3.3" version = "1.3.3"
@ -1431,7 +1426,7 @@ sqlite = []
[metadata] [metadata]
lock-version = "1.1" lock-version = "1.1"
python-versions = "^3.7.0" python-versions = "^3.7.0"
content-hash = "cb2e98135cf24ea8f088be58c7a5575699e965feb614cab4790e70e10912d89f" content-hash = "aa5088e659a67d99caef85dd908ea6d9333c6b84b11200d9f6f13f03999ca8e9"
[metadata.files] [metadata.files]
aiomysql = [ aiomysql = [
@ -1454,6 +1449,10 @@ astpretty = [
{file = "astpretty-2.1.0-py2.py3-none-any.whl", hash = "sha256:f81f14b5636f7af81fadb1e3c09ca7702ce4615500d9cc6d6829befb2dec2e3c"}, {file = "astpretty-2.1.0-py2.py3-none-any.whl", hash = "sha256:f81f14b5636f7af81fadb1e3c09ca7702ce4615500d9cc6d6829befb2dec2e3c"},
{file = "astpretty-2.1.0.tar.gz", hash = "sha256:8a801fcda604ec741f010bb36d7cbadc3ec8a182ea6fb83e20ab663463e75ff6"}, {file = "astpretty-2.1.0.tar.gz", hash = "sha256:8a801fcda604ec741f010bb36d7cbadc3ec8a182ea6fb83e20ab663463e75ff6"},
] ]
astunparse = [
{file = "astunparse-1.6.3-py2.py3-none-any.whl", hash = "sha256:c2652417f2c8b5bb325c885ae329bdf3f86424075c4fd1a128674bc6fba4b8e8"},
{file = "astunparse-1.6.3.tar.gz", hash = "sha256:5ad93a8456f0d084c3456d059fd9a92cce667963232cbf763eac3bc5b7940872"},
]
async-timeout = [ async-timeout = [
{file = "async-timeout-4.0.2.tar.gz", hash = "sha256:2163e1640ddb52b7a8c80d0a67a08587e5d245cc9c553a74a847056bc2976b15"}, {file = "async-timeout-4.0.2.tar.gz", hash = "sha256:2163e1640ddb52b7a8c80d0a67a08587e5d245cc9c553a74a847056bc2976b15"},
{file = "async_timeout-4.0.2-py3-none-any.whl", hash = "sha256:8ca1e4fcf50d07413d66d1a5e416e42cfdf5851c981d679a09851a6853383b3c"}, {file = "async_timeout-4.0.2-py3-none-any.whl", hash = "sha256:8ca1e4fcf50d07413d66d1a5e416e42cfdf5851c981d679a09851a6853383b3c"},
@ -1523,6 +1522,10 @@ black = [
{file = "black-22.3.0-py3-none-any.whl", hash = "sha256:bc58025940a896d7e5356952228b68f793cf5fcb342be703c3a2669a1488cb72"}, {file = "black-22.3.0-py3-none-any.whl", hash = "sha256:bc58025940a896d7e5356952228b68f793cf5fcb342be703c3a2669a1488cb72"},
{file = "black-22.3.0.tar.gz", hash = "sha256:35020b8886c022ced9282b51b5a875b6d1ab0c387b31a065b84db7c33085ca79"}, {file = "black-22.3.0.tar.gz", hash = "sha256:35020b8886c022ced9282b51b5a875b6d1ab0c387b31a065b84db7c33085ca79"},
] ]
cached-property = [
{file = "cached-property-1.5.2.tar.gz", hash = "sha256:9fa5755838eecbb2d234c3aa390bd80fbd3ac6b6869109bfc1b499f7bd89a130"},
{file = "cached_property-1.5.2-py2.py3-none-any.whl", hash = "sha256:df4f613cf7ad9a588cc381aaf4a512d26265ecebd5eb9e1ba12f1319eb85a6a0"},
]
certifi = [ certifi = [
{file = "certifi-2021.10.8-py2.py3-none-any.whl", hash = "sha256:d62a0163eb4c2344ac042ab2bdf75399a71a2d8c7d47eac2e2ee91b9d6339569"}, {file = "certifi-2021.10.8-py2.py3-none-any.whl", hash = "sha256:d62a0163eb4c2344ac042ab2bdf75399a71a2d8c7d47eac2e2ee91b9d6339569"},
{file = "certifi-2021.10.8.tar.gz", hash = "sha256:78884e7c1d4b00ce3cea67b44566851c4343c120abd683433ce934a68ea58872"}, {file = "certifi-2021.10.8.tar.gz", hash = "sha256:78884e7c1d4b00ce3cea67b44566851c4343c120abd683433ce934a68ea58872"},
@ -1672,41 +1675,14 @@ databases = [
{file = "databases-0.5.5-py3-none-any.whl", hash = "sha256:97d9b9647216d1ab53ca61c059412b5c7b6e1f0bf8ce985477982ebcc7f278f3"}, {file = "databases-0.5.5-py3-none-any.whl", hash = "sha256:97d9b9647216d1ab53ca61c059412b5c7b6e1f0bf8ce985477982ebcc7f278f3"},
{file = "databases-0.5.5.tar.gz", hash = "sha256:02c6b016c1c951c21cca281dc8e2e002c60dc44026c0084aabbd8c37514aeb37"}, {file = "databases-0.5.5.tar.gz", hash = "sha256:02c6b016c1c951c21cca281dc8e2e002c60dc44026c0084aabbd8c37514aeb37"},
] ]
databind = [
{file = "databind-1.5.2-py3-none-any.whl", hash = "sha256:62e61019c5178463a08c890abb2b1a1f9de24aefb97a3ce6a6c803278c94a19d"},
{file = "databind-1.5.2.tar.gz", hash = "sha256:69e400387a92f04b429ca640f23f027ef15745e4deba15cfb1c7193968fb9760"},
]
"databind.core" = [
{file = "databind.core-1.5.2-py3-none-any.whl", hash = "sha256:3939b2878a751015bf74dd62ab02884d4aa28a11baebeea7dac22140ba2bdeaf"},
{file = "databind.core-1.5.2.tar.gz", hash = "sha256:106d95a365ccc0d955ff1a7c79bee0101fe64f2447ee1d6b4b5666ad1422dd33"},
]
"databind.json" = [
{file = "databind.json-1.5.2-py3-none-any.whl", hash = "sha256:25680d75b9002bc5dafd49018d508603b885d10c4a339f6618fd970df7d4d743"},
{file = "databind.json-1.5.2.tar.gz", hash = "sha256:1ed60b6b46b88cfdf21941523e6ea021c651f2afb280ffe85b68c7ce1dd6a19d"},
]
dataclasses = [ dataclasses = [
{file = "dataclasses-0.6-py3-none-any.whl", hash = "sha256:454a69d788c7fda44efd71e259be79577822f5e3f53f029a22d08004e951dc9f"}, {file = "dataclasses-0.6-py3-none-any.whl", hash = "sha256:454a69d788c7fda44efd71e259be79577822f5e3f53f029a22d08004e951dc9f"},
{file = "dataclasses-0.6.tar.gz", hash = "sha256:6988bd2b895eef432d562370bb707d540f32f7360ab13da45340101bc2307d84"}, {file = "dataclasses-0.6.tar.gz", hash = "sha256:6988bd2b895eef432d562370bb707d540f32f7360ab13da45340101bc2307d84"},
] ]
deprecated = [
{file = "Deprecated-1.2.13-py2.py3-none-any.whl", hash = "sha256:64756e3e14c8c5eea9795d93c524551432a0be75629f8f29e67ab8caf076c76d"},
{file = "Deprecated-1.2.13.tar.gz", hash = "sha256:43ac5335da90c31c24ba028af536a91d41d53f9e6901ddb021bcc572ce44e38d"},
]
distlib = [ distlib = [
{file = "distlib-0.3.4-py2.py3-none-any.whl", hash = "sha256:6564fe0a8f51e734df6333d08b8b94d4ea8ee6b99b5ed50613f731fd4089f34b"}, {file = "distlib-0.3.4-py2.py3-none-any.whl", hash = "sha256:6564fe0a8f51e734df6333d08b8b94d4ea8ee6b99b5ed50613f731fd4089f34b"},
{file = "distlib-0.3.4.zip", hash = "sha256:e4b58818180336dc9c529bfb9a0b58728ffc09ad92027a3f30b7cd91e3458579"}, {file = "distlib-0.3.4.zip", hash = "sha256:e4b58818180336dc9c529bfb9a0b58728ffc09ad92027a3f30b7cd91e3458579"},
] ]
docspec = [
{file = "docspec-2.0.1-py3-none-any.whl", hash = "sha256:b13328f8eb709da3e1d26c3ab6d2cef16c72744bf4bac575f08e4509ea8fd51b"},
{file = "docspec-2.0.1.tar.gz", hash = "sha256:0a70cbd2a057279adc88b6d0c0dc74ee90e46d3741b3556336f822abe2d185ac"},
]
docspec-python = [
{file = "docspec-python-2.0.1.tar.gz", hash = "sha256:9e5a8f9104f727f8f8a568c974e079f65d3bdbb1b4297bd6f6f7f8757928147f"},
{file = "docspec_python-2.0.1-py3-none-any.whl", hash = "sha256:f43bbd1590b0b9f3d7c82576d364c97604f2232e38c27430a1aeaf6672afb43f"},
]
docstring-parser = [
{file = "docstring_parser-0.11.tar.gz", hash = "sha256:93b3f8f481c7d24e37c5d9f30293c89e2933fa209421c8abd731dd3ef0715ecb"},
]
fastapi = [ fastapi = [
{file = "fastapi-0.75.2-py3-none-any.whl", hash = "sha256:a70d31f4249b6b42dbe267667d22f83af645b2d857876c97f83ca9573215784f"}, {file = "fastapi-0.75.2-py3-none-any.whl", hash = "sha256:a70d31f4249b6b42dbe267667d22f83af645b2d857876c97f83ca9573215784f"},
{file = "fastapi-0.75.2.tar.gz", hash = "sha256:b5dac161ee19d33346040d3f44d8b7a9ac09b37df9efff95891f5e7641fa482f"}, {file = "fastapi-0.75.2.tar.gz", hash = "sha256:b5dac161ee19d33346040d3f44d8b7a9ac09b37df9efff95891f5e7641fa482f"},
@ -1827,6 +1803,10 @@ greenlet = [
{file = "greenlet-1.1.2-cp39-cp39-win_amd64.whl", hash = "sha256:013d61294b6cd8fe3242932c1c5e36e5d1db2c8afb58606c5a67efce62c1f5fd"}, {file = "greenlet-1.1.2-cp39-cp39-win_amd64.whl", hash = "sha256:013d61294b6cd8fe3242932c1c5e36e5d1db2c8afb58606c5a67efce62c1f5fd"},
{file = "greenlet-1.1.2.tar.gz", hash = "sha256:e30f5ea4ae2346e62cedde8794a56858a67b878dd79f7df76a0767e356b1744a"}, {file = "greenlet-1.1.2.tar.gz", hash = "sha256:e30f5ea4ae2346e62cedde8794a56858a67b878dd79f7df76a0767e356b1744a"},
] ]
griffe = [
{file = "griffe-0.19.0-py3-none-any.whl", hash = "sha256:1bff0dc8692c862780c5c5ab12e19ae70855553b48cab03210c4b14402594c66"},
{file = "griffe-0.19.0.tar.gz", hash = "sha256:9fd1ae56b819e0c3c48e2d8d62f768721e39f7290e5e7de8caa010fb77073b4c"},
]
identify = [ identify = [
{file = "identify-2.5.0-py2.py3-none-any.whl", hash = "sha256:3acfe15a96e4272b4ec5662ee3e231ceba976ef63fd9980ed2ce9cc415df393f"}, {file = "identify-2.5.0-py2.py3-none-any.whl", hash = "sha256:3acfe15a96e4272b4ec5662ee3e231ceba976ef63fd9980ed2ce9cc415df393f"},
{file = "identify-2.5.0.tar.gz", hash = "sha256:c83af514ea50bf2be2c4a3f2fb349442b59dc87284558ae9ff54191bff3541d2"}, {file = "identify-2.5.0.tar.gz", hash = "sha256:c83af514ea50bf2be2c4a3f2fb349442b59dc87284558ae9ff54191bff3541d2"},
@ -1848,8 +1828,8 @@ jinja2 = [
{file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"}, {file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
] ]
markdown = [ markdown = [
{file = "Markdown-3.3.6-py3-none-any.whl", hash = "sha256:9923332318f843411e9932237530df53162e29dc7a4e2b91e35764583c46c9a3"}, {file = "Markdown-3.3.7-py3-none-any.whl", hash = "sha256:f5da449a6e1c989a4cea2631aa8ee67caa5a2ef855d551c88f9e309f4634c621"},
{file = "Markdown-3.3.6.tar.gz", hash = "sha256:76df8ae32294ec39dcf89340382882dfa12975f87f45c3ed1ecdb1e8cefc7006"}, {file = "Markdown-3.3.7.tar.gz", hash = "sha256:cbb516f16218e643d8e0a95b309f77eb118cb138d39a4f27851e6a63581db874"},
] ]
markupsafe = [ markupsafe = [
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:86b1f75c4e7c2ac2ccdaec2b9022845dbb81880ca318bb7a0a01fbf7813e3812"}, {file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:86b1f75c4e7c2ac2ccdaec2b9022845dbb81880ca318bb7a0a01fbf7813e3812"},
@ -1905,6 +1885,18 @@ mkdocs = [
{file = "mkdocs-1.3.0-py3-none-any.whl", hash = "sha256:26bd2b03d739ac57a3e6eed0b7bcc86168703b719c27b99ad6ca91dc439aacde"}, {file = "mkdocs-1.3.0-py3-none-any.whl", hash = "sha256:26bd2b03d739ac57a3e6eed0b7bcc86168703b719c27b99ad6ca91dc439aacde"},
{file = "mkdocs-1.3.0.tar.gz", hash = "sha256:b504405b04da38795fec9b2e5e28f6aa3a73bb0960cb6d5d27ead28952bd35ea"}, {file = "mkdocs-1.3.0.tar.gz", hash = "sha256:b504405b04da38795fec9b2e5e28f6aa3a73bb0960cb6d5d27ead28952bd35ea"},
] ]
mkdocs-autorefs = [
{file = "mkdocs-autorefs-0.4.1.tar.gz", hash = "sha256:70748a7bd025f9ecd6d6feeba8ba63f8e891a1af55f48e366d6d6e78493aba84"},
{file = "mkdocs_autorefs-0.4.1-py3-none-any.whl", hash = "sha256:a2248a9501b29dc0cc8ba4c09f4f47ff121945f6ce33d760f145d6f89d313f5b"},
]
mkdocs-gen-files = [
{file = "mkdocs-gen-files-0.3.4.tar.gz", hash = "sha256:c69188486bdc1e74bd2b9b7ebbde9f9eb21052ae7762f1b35420cfbfc6d7122e"},
{file = "mkdocs_gen_files-0.3.4-py3-none-any.whl", hash = "sha256:07f43245c87a03cfb03884e767655c2a61def24d07e47fb3a8d26b1581524d6a"},
]
mkdocs-literate-nav = [
{file = "mkdocs-literate-nav-0.4.1.tar.gz", hash = "sha256:9efe26b662f2f901cae5807bfd51446d30ea7e033c2bc43a15d6282c7dfac1ab"},
{file = "mkdocs_literate_nav-0.4.1-py3-none-any.whl", hash = "sha256:a4b761792ba21defbe2dfd5e0de6ba451639e1ca0f0661c37eda83cc6261e4f9"},
]
mkdocs-material = [ mkdocs-material = [
{file = "mkdocs-material-8.2.13.tar.gz", hash = "sha256:505408fe001d668543236f5db5a88771460ad83ef7b58826630cc1f8b7e63099"}, {file = "mkdocs-material-8.2.13.tar.gz", hash = "sha256:505408fe001d668543236f5db5a88771460ad83ef7b58826630cc1f8b7e63099"},
{file = "mkdocs_material-8.2.13-py2.py3-none-any.whl", hash = "sha256:2666f1d7d6a8dc28dda1e777f77add12799e66bd00250de99914a33525763816"}, {file = "mkdocs_material-8.2.13-py2.py3-none-any.whl", hash = "sha256:2666f1d7d6a8dc28dda1e777f77add12799e66bd00250de99914a33525763816"},
@ -1913,6 +1905,22 @@ mkdocs-material-extensions = [
{file = "mkdocs-material-extensions-1.0.3.tar.gz", hash = "sha256:bfd24dfdef7b41c312ede42648f9eb83476ea168ec163b613f9abd12bbfddba2"}, {file = "mkdocs-material-extensions-1.0.3.tar.gz", hash = "sha256:bfd24dfdef7b41c312ede42648f9eb83476ea168ec163b613f9abd12bbfddba2"},
{file = "mkdocs_material_extensions-1.0.3-py3-none-any.whl", hash = "sha256:a82b70e533ce060b2a5d9eb2bc2e1be201cf61f901f93704b4acf6e3d5983a44"}, {file = "mkdocs_material_extensions-1.0.3-py3-none-any.whl", hash = "sha256:a82b70e533ce060b2a5d9eb2bc2e1be201cf61f901f93704b4acf6e3d5983a44"},
] ]
mkdocs-section-index = [
{file = "mkdocs-section-index-0.3.4.tar.gz", hash = "sha256:050151bfe7c0e374f197335e0ecb19c45b53dbafc0f817aa203f0cc24bcf7d10"},
{file = "mkdocs_section_index-0.3.4-py3-none-any.whl", hash = "sha256:214f7a6df9d35a5772e9577f3899ff3edd90044064589e6dd4d84615b72a8024"},
]
mkdocstrings = [
{file = "mkdocstrings-0.18.0-py3-none-any.whl", hash = "sha256:75e277f6a56a894a727efcf6d418d36cd43d4db7da9614c2dc23300e257d95ad"},
{file = "mkdocstrings-0.18.0.tar.gz", hash = "sha256:01d8ab962fc1f388c9b15cbf8c078b8738f92adf983b626d74135aaee2bce33a"},
]
mkdocstrings-python = [
{file = "mkdocstrings-python-0.6.6.tar.gz", hash = "sha256:37281696b9f199624ae420e0625b6659b7fdfbea736618bce7fd978682dea3b1"},
{file = "mkdocstrings_python-0.6.6-py3-none-any.whl", hash = "sha256:c118438d3cb4b14c492a51d109f4e5b27ab06ba19b099d624430dfd904926152"},
]
mkdocstrings-python-legacy = [
{file = "mkdocstrings-python-legacy-0.2.2.tar.gz", hash = "sha256:f0e7ec6a19750581b752acb38f6b32fcd1efe006f14f6703125d2c2c9a5c6f02"},
{file = "mkdocstrings_python_legacy-0.2.2-py3-none-any.whl", hash = "sha256:379107a3a5b8db9b462efc4493c122efe21e825e3702425dbd404621302a563a"},
]
mr-proper = [ mr-proper = [
{file = "mr_proper-0.0.7-py3-none-any.whl", hash = "sha256:74a1b60240c46f10ba518707ef72811a01e5c270da0a78b5dd2dd923d99fdb14"}, {file = "mr_proper-0.0.7-py3-none-any.whl", hash = "sha256:74a1b60240c46f10ba518707ef72811a01e5c270da0a78b5dd2dd923d99fdb14"},
{file = "mr_proper-0.0.7.tar.gz", hash = "sha256:03b517b19e617537f711ce418b125e5f2efd82ec881539cdee83195c78c14a02"}, {file = "mr_proper-0.0.7.tar.gz", hash = "sha256:03b517b19e617537f711ce418b125e5f2efd82ec881539cdee83195c78c14a02"},
@ -1957,10 +1965,6 @@ nodeenv = [
{file = "nodeenv-1.6.0-py2.py3-none-any.whl", hash = "sha256:621e6b7076565ddcacd2db0294c0381e01fd28945ab36bcf00f41c5daf63bef7"}, {file = "nodeenv-1.6.0-py2.py3-none-any.whl", hash = "sha256:621e6b7076565ddcacd2db0294c0381e01fd28945ab36bcf00f41c5daf63bef7"},
{file = "nodeenv-1.6.0.tar.gz", hash = "sha256:3ef13ff90291ba2a4a7a4ff9a979b63ffdd00a464dbe04acf0ea6471517a4c2b"}, {file = "nodeenv-1.6.0.tar.gz", hash = "sha256:3ef13ff90291ba2a4a7a4ff9a979b63ffdd00a464dbe04acf0ea6471517a4c2b"},
] ]
"nr.util" = [
{file = "nr.util-0.8.11-py3-none-any.whl", hash = "sha256:2513458a7bffca4b67b0c19aabf09792d2062faed59c214b54e3b17d3c50c4c9"},
{file = "nr.util-0.8.11.tar.gz", hash = "sha256:fc4562d41cf1276d469cbd2797d02886e52133f12db1145ac9a209a44181c601"},
]
orjson = [ orjson = [
{file = "orjson-3.6.8-cp310-cp310-macosx_10_7_x86_64.whl", hash = "sha256:3a287a650458de2211db03681b71c3e5cb2212b62f17a39df8ad99fc54855d0f"}, {file = "orjson-3.6.8-cp310-cp310-macosx_10_7_x86_64.whl", hash = "sha256:3a287a650458de2211db03681b71c3e5cb2212b62f17a39df8ad99fc54855d0f"},
{file = "orjson-3.6.8-cp310-cp310-macosx_10_9_x86_64.macosx_11_0_arm64.macosx_10_9_universal2.whl", hash = "sha256:5204e25c12cea58e524fc82f7c27ed0586f592f777b33075a92ab7b3eb3687c2"}, {file = "orjson-3.6.8-cp310-cp310-macosx_10_9_x86_64.macosx_11_0_arm64.macosx_10_9_universal2.whl", hash = "sha256:5204e25c12cea58e524fc82f7c27ed0586f592f777b33075a92ab7b3eb3687c2"},
@ -2004,8 +2008,8 @@ pathspec = [
{file = "pathspec-0.9.0.tar.gz", hash = "sha256:e564499435a2673d586f6b2130bb5b95f04a3ba06f81b8f895b651a3c76aabb1"}, {file = "pathspec-0.9.0.tar.gz", hash = "sha256:e564499435a2673d586f6b2130bb5b95f04a3ba06f81b8f895b651a3c76aabb1"},
] ]
pbr = [ pbr = [
{file = "pbr-5.8.1-py2.py3-none-any.whl", hash = "sha256:27108648368782d07bbf1cb468ad2e2eeef29086affd14087a6d04b7de8af4ec"}, {file = "pbr-5.9.0-py2.py3-none-any.whl", hash = "sha256:e547125940bcc052856ded43be8e101f63828c2d94239ffbe2b327ba3d5ccf0a"},
{file = "pbr-5.8.1.tar.gz", hash = "sha256:66bc5a34912f408bb3925bf21231cb6f59206267b7f63f3503ef865c1a292e25"}, {file = "pbr-5.9.0.tar.gz", hash = "sha256:e8dca2f4b43560edef58813969f52a56cef023146cbb8931626db80e6c1c4308"},
] ]
platformdirs = [ platformdirs = [
{file = "platformdirs-2.5.2-py3-none-any.whl", hash = "sha256:027d8e83a2d7de06bbac4e5ef7e023c02b863d7ea5d079477e722bb41ab25788"}, {file = "platformdirs-2.5.2-py3-none-any.whl", hash = "sha256:027d8e83a2d7de06bbac4e5ef7e023c02b863d7ea5d079477e722bb41ab25788"},
@ -2126,10 +2130,6 @@ pydantic = [
{file = "pydantic-1.9.0-py3-none-any.whl", hash = "sha256:085ca1de245782e9b46cefcf99deecc67d418737a1fd3f6a4f511344b613a5b3"}, {file = "pydantic-1.9.0-py3-none-any.whl", hash = "sha256:085ca1de245782e9b46cefcf99deecc67d418737a1fd3f6a4f511344b613a5b3"},
{file = "pydantic-1.9.0.tar.gz", hash = "sha256:742645059757a56ecd886faf4ed2441b9c0cd406079c2b4bee51bcc3fbcd510a"}, {file = "pydantic-1.9.0.tar.gz", hash = "sha256:742645059757a56ecd886faf4ed2441b9c0cd406079c2b4bee51bcc3fbcd510a"},
] ]
pydoc-markdown = [
{file = "pydoc-markdown-4.6.3.tar.gz", hash = "sha256:82aa424de7e390e4a34ac200b189bb011e33be18bb089e83b15af5742cf21dc7"},
{file = "pydoc_markdown-4.6.3-py3-none-any.whl", hash = "sha256:b79da5b972be5c24db41ed5ce65fedd0031aa9cd023467d42cd47882a5e92f98"},
]
pyflakes = [ pyflakes = [
{file = "pyflakes-2.3.1-py2.py3-none-any.whl", hash = "sha256:7893783d01b8a89811dd72d7dfd4d84ff098e5eed95cfa8905b22bbffe52efc3"}, {file = "pyflakes-2.3.1-py2.py3-none-any.whl", hash = "sha256:7893783d01b8a89811dd72d7dfd4d84ff098e5eed95cfa8905b22bbffe52efc3"},
{file = "pyflakes-2.3.1.tar.gz", hash = "sha256:f5bc8ecabc05bb9d291eb5203d6810b49040f6ff446a756326104746cc00c1db"}, {file = "pyflakes-2.3.1.tar.gz", hash = "sha256:f5bc8ecabc05bb9d291eb5203d6810b49040f6ff446a756326104746cc00c1db"},
@ -2167,36 +2167,44 @@ python-dateutil = [
{file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"}, {file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"},
{file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"}, {file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"},
] ]
pytkdocs = [
{file = "pytkdocs-0.16.1-py3-none-any.whl", hash = "sha256:a8c3f46ecef0b92864cc598e9101e9c4cf832ebbf228f50c84aa5dd850aac379"},
{file = "pytkdocs-0.16.1.tar.gz", hash = "sha256:e2ccf6dfe9dbbceb09818673f040f1a7c32ed0bffb2d709b06be6453c4026045"},
]
pyyaml = [ pyyaml = [
{file = "PyYAML-5.4.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:3b2b1824fe7112845700f815ff6a489360226a5609b96ec2190a45e62a9fc922"}, {file = "PyYAML-6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d4db7c7aef085872ef65a8fd7d6d09a14ae91f691dec3e87ee5ee0539d516f53"},
{file = "PyYAML-5.4.1-cp27-cp27m-win32.whl", hash = "sha256:129def1b7c1bf22faffd67b8f3724645203b79d8f4cc81f674654d9902cb4393"}, {file = "PyYAML-6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9df7ed3b3d2e0ecfe09e14741b857df43adb5a3ddadc919a2d94fbdf78fea53c"},
{file = "PyYAML-5.4.1-cp27-cp27m-win_amd64.whl", hash = "sha256:4465124ef1b18d9ace298060f4eccc64b0850899ac4ac53294547536533800c8"}, {file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:77f396e6ef4c73fdc33a9157446466f1cff553d979bd00ecb64385760c6babdc"},
{file = "PyYAML-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:bb4191dfc9306777bc594117aee052446b3fa88737cd13b7188d0e7aa8162185"}, {file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a80a78046a72361de73f8f395f1f1e49f956c6be882eed58505a15f3e430962b"},
{file = "PyYAML-5.4.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:6c78645d400265a062508ae399b60b8c167bf003db364ecb26dcab2bda048253"}, {file = "PyYAML-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f84fbc98b019fef2ee9a1cb3ce93e3187a6df0b2538a651bfb890254ba9f90b5"},
{file = "PyYAML-5.4.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:4e0583d24c881e14342eaf4ec5fbc97f934b999a6828693a99157fde912540cc"}, {file = "PyYAML-6.0-cp310-cp310-win32.whl", hash = "sha256:2cd5df3de48857ed0544b34e2d40e9fac445930039f3cfe4bcc592a1f836d513"},
{file = "PyYAML-5.4.1-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:72a01f726a9c7851ca9bfad6fd09ca4e090a023c00945ea05ba1638c09dc3347"}, {file = "PyYAML-6.0-cp310-cp310-win_amd64.whl", hash = "sha256:daf496c58a8c52083df09b80c860005194014c3698698d1a57cbcfa182142a3a"},
{file = "PyYAML-5.4.1-cp36-cp36m-manylinux2014_s390x.whl", hash = "sha256:895f61ef02e8fed38159bb70f7e100e00f471eae2bc838cd0f4ebb21e28f8541"}, {file = "PyYAML-6.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:897b80890765f037df3403d22bab41627ca8811ae55e9a722fd0392850ec4d86"},
{file = "PyYAML-5.4.1-cp36-cp36m-win32.whl", hash = "sha256:3bd0e463264cf257d1ffd2e40223b197271046d09dadf73a0fe82b9c1fc385a5"}, {file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50602afada6d6cbfad699b0c7bb50d5ccffa7e46a3d738092afddc1f9758427f"},
{file = "PyYAML-5.4.1-cp36-cp36m-win_amd64.whl", hash = "sha256:e4fac90784481d221a8e4b1162afa7c47ed953be40d31ab4629ae917510051df"}, {file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:48c346915c114f5fdb3ead70312bd042a953a8ce5c7106d5bfb1a5254e47da92"},
{file = "PyYAML-5.4.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5accb17103e43963b80e6f837831f38d314a0495500067cb25afab2e8d7a4018"}, {file = "PyYAML-6.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:98c4d36e99714e55cfbaaee6dd5badbc9a1ec339ebfc3b1f52e293aee6bb71a4"},
{file = "PyYAML-5.4.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:e1d4970ea66be07ae37a3c2e48b5ec63f7ba6804bdddfdbd3cfd954d25a82e63"}, {file = "PyYAML-6.0-cp36-cp36m-win32.whl", hash = "sha256:0283c35a6a9fbf047493e3a0ce8d79ef5030852c51e9d911a27badfde0605293"},
{file = "PyYAML-5.4.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:cb333c16912324fd5f769fff6bc5de372e9e7a202247b48870bc251ed40239aa"}, {file = "PyYAML-6.0-cp36-cp36m-win_amd64.whl", hash = "sha256:07751360502caac1c067a8132d150cf3d61339af5691fe9e87803040dbc5db57"},
{file = "PyYAML-5.4.1-cp37-cp37m-manylinux2014_s390x.whl", hash = "sha256:fe69978f3f768926cfa37b867e3843918e012cf83f680806599ddce33c2c68b0"}, {file = "PyYAML-6.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:819b3830a1543db06c4d4b865e70ded25be52a2e0631ccd2f6a47a2822f2fd7c"},
{file = "PyYAML-5.4.1-cp37-cp37m-win32.whl", hash = "sha256:dd5de0646207f053eb0d6c74ae45ba98c3395a571a2891858e87df7c9b9bd51b"}, {file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:473f9edb243cb1935ab5a084eb238d842fb8f404ed2193a915d1784b5a6b5fc0"},
{file = "PyYAML-5.4.1-cp37-cp37m-win_amd64.whl", hash = "sha256:08682f6b72c722394747bddaf0aa62277e02557c0fd1c42cb853016a38f8dedf"}, {file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0ce82d761c532fe4ec3f87fc45688bdd3a4c1dc5e0b4a19814b9009a29baefd4"},
{file = "PyYAML-5.4.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d2d9808ea7b4af864f35ea216be506ecec180628aced0704e34aca0b040ffe46"}, {file = "PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:231710d57adfd809ef5d34183b8ed1eeae3f76459c18fb4a0b373ad56bedcdd9"},
{file = "PyYAML-5.4.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:8c1be557ee92a20f184922c7b6424e8ab6691788e6d86137c5d93c1a6ec1b8fb"}, {file = "PyYAML-6.0-cp37-cp37m-win32.whl", hash = "sha256:c5687b8d43cf58545ade1fe3e055f70eac7a5a1a0bf42824308d868289a95737"},
{file = "PyYAML-5.4.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:fd7f6999a8070df521b6384004ef42833b9bd62cfee11a09bda1079b4b704247"}, {file = "PyYAML-6.0-cp37-cp37m-win_amd64.whl", hash = "sha256:d15a181d1ecd0d4270dc32edb46f7cb7733c7c508857278d3d378d14d606db2d"},
{file = "PyYAML-5.4.1-cp38-cp38-manylinux2014_s390x.whl", hash = "sha256:bfb51918d4ff3d77c1c856a9699f8492c612cde32fd3bcd344af9be34999bfdc"}, {file = "PyYAML-6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0b4624f379dab24d3725ffde76559cff63d9ec94e1736b556dacdfebe5ab6d4b"},
{file = "PyYAML-5.4.1-cp38-cp38-win32.whl", hash = "sha256:fa5ae20527d8e831e8230cbffd9f8fe952815b2b7dae6ffec25318803a7528fc"}, {file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:213c60cd50106436cc818accf5baa1aba61c0189ff610f64f4a3e8c6726218ba"},
{file = "PyYAML-5.4.1-cp38-cp38-win_amd64.whl", hash = "sha256:0f5f5786c0e09baddcd8b4b45f20a7b5d61a7e7e99846e3c799b05c7c53fa696"}, {file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9fa600030013c4de8165339db93d182b9431076eb98eb40ee068700c9c813e34"},
{file = "PyYAML-5.4.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:294db365efa064d00b8d1ef65d8ea2c3426ac366c0c4368d930bf1c5fb497f77"}, {file = "PyYAML-6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:277a0ef2981ca40581a47093e9e2d13b3f1fbbeffae064c1d21bfceba2030287"},
{file = "PyYAML-5.4.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:74c1485f7707cf707a7aef42ef6322b8f97921bd89be2ab6317fd782c2d53183"}, {file = "PyYAML-6.0-cp38-cp38-win32.whl", hash = "sha256:d4eccecf9adf6fbcc6861a38015c2a64f38b9d94838ac1810a9023a0609e1b78"},
{file = "PyYAML-5.4.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:d483ad4e639292c90170eb6f7783ad19490e7a8defb3e46f97dfe4bacae89122"}, {file = "PyYAML-6.0-cp38-cp38-win_amd64.whl", hash = "sha256:1e4747bc279b4f613a09eb64bba2ba602d8a6664c6ce6396a4d0cd413a50ce07"},
{file = "PyYAML-5.4.1-cp39-cp39-manylinux2014_s390x.whl", hash = "sha256:fdc842473cd33f45ff6bce46aea678a54e3d21f1b61a7750ce3c498eedfe25d6"}, {file = "PyYAML-6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:055d937d65826939cb044fc8c9b08889e8c743fdc6a32b33e2390f66013e449b"},
{file = "PyYAML-5.4.1-cp39-cp39-win32.whl", hash = "sha256:49d4cdd9065b9b6e206d0595fee27a96b5dd22618e7520c33204a4a3239d5b10"}, {file = "PyYAML-6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e61ceaab6f49fb8bdfaa0f92c4b57bcfbea54c09277b1b4f7ac376bfb7a7c174"},
{file = "PyYAML-5.4.1-cp39-cp39-win_amd64.whl", hash = "sha256:c20cfa2d49991c8b4147af39859b167664f2ad4561704ee74c1de03318e898db"}, {file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d67d839ede4ed1b28a4e8909735fc992a923cdb84e618544973d7dfc71540803"},
{file = "PyYAML-5.4.1.tar.gz", hash = "sha256:607774cbba28732bfa802b54baa7484215f530991055bb562efbed5b2f20a45e"}, {file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cba8c411ef271aa037d7357a2bc8f9ee8b58b9965831d9e51baf703280dc73d3"},
{file = "PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:40527857252b61eacd1d9af500c3337ba8deb8fc298940291486c465c8b46ec0"},
{file = "PyYAML-6.0-cp39-cp39-win32.whl", hash = "sha256:b5b9eccad747aabaaffbc6064800670f0c297e52c12754eb1d976c57e4f74dcb"},
{file = "PyYAML-6.0-cp39-cp39-win_amd64.whl", hash = "sha256:b3d267842bf12586ba6c734f89d1f5b871df0273157918b0ccefa29deb05c21c"},
{file = "PyYAML-6.0.tar.gz", hash = "sha256:68fb519c14306fec9720a2a5b45bc9f0c8d1b9c72adf45c37baedfcd949c35a2"},
] ]
pyyaml-env-tag = [
{file = "pyyaml_env_tag-0.1-py3-none-any.whl", hash = "sha256:af31106dec8a4d68c60207c1886031cbf839b68aa7abccdb19868200532c2069"},
@ -2276,10 +2284,6 @@ tomli = [
{file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"}, {file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"},
{file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"}, {file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"},
] ]
tomli-w = [
{file = "tomli_w-1.0.0-py3-none-any.whl", hash = "sha256:9f2a07e8be30a0729e533ec968016807069991ae2fd921a78d42f429ae5f4463"},
{file = "tomli_w-1.0.0.tar.gz", hash = "sha256:f463434305e0336248cac9c2dc8076b707d8a12d019dd349f5c1e382dd1ae1b9"},
]
typed-ast = [
{file = "typed_ast-1.5.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:9ad3b48cf2b487be140072fb86feff36801487d4abb7382bb1929aaac80638ea"},
{file = "typed_ast-1.5.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:542cd732351ba8235f20faa0fc7398946fe1a57f2cdb289e5497e1e7f48cfedb"},
@ -2389,76 +2393,6 @@ watchdog = [
{file = "watchdog-2.1.7-py3-none-win_ia64.whl", hash = "sha256:351e09b6d9374d5bcb947e6ac47a608ec25b9d70583e9db00b2fcdb97b00b572"}, {file = "watchdog-2.1.7-py3-none-win_ia64.whl", hash = "sha256:351e09b6d9374d5bcb947e6ac47a608ec25b9d70583e9db00b2fcdb97b00b572"},
{file = "watchdog-2.1.7.tar.gz", hash = "sha256:3fd47815353be9c44eebc94cc28fe26b2b0c5bd889dafc4a5a7cbdf924143480"}, {file = "watchdog-2.1.7.tar.gz", hash = "sha256:3fd47815353be9c44eebc94cc28fe26b2b0c5bd889dafc4a5a7cbdf924143480"},
] ]
wrapt = [
{file = "wrapt-1.14.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:1b376b3f4896e7930f1f772ac4b064ac12598d1c38d04907e696cc4d794b43d3"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:903500616422a40a98a5a3c4ff4ed9d0066f3b4c951fa286018ecdf0750194ef"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:5a9a0d155deafd9448baff28c08e150d9b24ff010e899311ddd63c45c2445e28"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:ddaea91abf8b0d13443f6dac52e89051a5063c7d014710dcb4d4abb2ff811a59"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:36f582d0c6bc99d5f39cd3ac2a9062e57f3cf606ade29a0a0d6b323462f4dd87"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:7ef58fb89674095bfc57c4069e95d7a31cfdc0939e2a579882ac7d55aadfd2a1"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:e2f83e18fe2f4c9e7db597e988f72712c0c3676d337d8b101f6758107c42425b"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:ee2b1b1769f6707a8a445162ea16dddf74285c3964f605877a20e38545c3c462"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:833b58d5d0b7e5b9832869f039203389ac7cbf01765639c7309fd50ef619e0b1"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:80bb5c256f1415f747011dc3604b59bc1f91c6e7150bd7db03b19170ee06b320"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:07f7a7d0f388028b2df1d916e94bbb40624c59b48ecc6cbc232546706fac74c2"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:02b41b633c6261feff8ddd8d11c711df6842aba629fdd3da10249a53211a72c4"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2fe803deacd09a233e4762a1adcea5db5d31e6be577a43352936179d14d90069"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:257fd78c513e0fb5cdbe058c27a0624c9884e735bbd131935fd49e9fe719d310"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:4fcc4649dc762cddacd193e6b55bc02edca674067f5f98166d7713b193932b7f"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:11871514607b15cfeb87c547a49bca19fde402f32e2b1c24a632506c0a756656"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8ad85f7f4e20964db4daadcab70b47ab05c7c1cf2a7c1e51087bfaa83831854c"},
{file = "wrapt-1.14.1-cp310-cp310-win32.whl", hash = "sha256:a9a52172be0b5aae932bef82a79ec0a0ce87288c7d132946d645eba03f0ad8a8"},
{file = "wrapt-1.14.1-cp310-cp310-win_amd64.whl", hash = "sha256:6d323e1554b3d22cfc03cd3243b5bb815a51f5249fdcbb86fda4bf62bab9e164"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:43ca3bbbe97af00f49efb06e352eae40434ca9d915906f77def219b88e85d907"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:6b1a564e6cb69922c7fe3a678b9f9a3c54e72b469875aa8018f18b4d1dd1adf3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:00b6d4ea20a906c0ca56d84f93065b398ab74b927a7a3dbd470f6fc503f95dc3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:a85d2b46be66a71bedde836d9e41859879cc54a2a04fad1191eb50c2066f6e9d"},
{file = "wrapt-1.14.1-cp35-cp35m-win32.whl", hash = "sha256:dbcda74c67263139358f4d188ae5faae95c30929281bc6866d00573783c422b7"},
{file = "wrapt-1.14.1-cp35-cp35m-win_amd64.whl", hash = "sha256:b21bb4c09ffabfa0e85e3a6b623e19b80e7acd709b9f91452b8297ace2a8ab00"},
{file = "wrapt-1.14.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:9e0fd32e0148dd5dea6af5fee42beb949098564cc23211a88d799e434255a1f4"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9736af4641846491aedb3c3f56b9bc5568d92b0692303b5a305301a95dfd38b1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5b02d65b9ccf0ef6c34cba6cf5bf2aab1bb2f49c6090bafeecc9cd81ad4ea1c1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21ac0156c4b089b330b7666db40feee30a5d52634cc4560e1905d6529a3897ff"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:9f3e6f9e05148ff90002b884fbc2a86bd303ae847e472f44ecc06c2cd2fcdb2d"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:6e743de5e9c3d1b7185870f480587b75b1cb604832e380d64f9504a0535912d1"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:d79d7d5dc8a32b7093e81e97dad755127ff77bcc899e845f41bf71747af0c569"},
{file = "wrapt-1.14.1-cp36-cp36m-win32.whl", hash = "sha256:81b19725065dcb43df02b37e03278c011a09e49757287dca60c5aecdd5a0b8ed"},
{file = "wrapt-1.14.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b014c23646a467558be7da3d6b9fa409b2c567d2110599b7cf9a0c5992b3b471"},
{file = "wrapt-1.14.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:88bd7b6bd70a5b6803c1abf6bca012f7ed963e58c68d76ee20b9d751c74a3248"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b5901a312f4d14c59918c221323068fad0540e34324925c8475263841dbdfe68"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d77c85fedff92cf788face9bfa3ebaa364448ebb1d765302e9af11bf449ca36d"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d649d616e5c6a678b26d15ece345354f7c2286acd6db868e65fcc5ff7c24a77"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:7d2872609603cb35ca513d7404a94d6d608fc13211563571117046c9d2bcc3d7"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:ee6acae74a2b91865910eef5e7de37dc6895ad96fa23603d1d27ea69df545015"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:2b39d38039a1fdad98c87279b48bc5dce2c0ca0d73483b12cb72aa9609278e8a"},
{file = "wrapt-1.14.1-cp37-cp37m-win32.whl", hash = "sha256:60db23fa423575eeb65ea430cee741acb7c26a1365d103f7b0f6ec412b893853"},
{file = "wrapt-1.14.1-cp37-cp37m-win_amd64.whl", hash = "sha256:709fe01086a55cf79d20f741f39325018f4df051ef39fe921b1ebe780a66184c"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8c0ce1e99116d5ab21355d8ebe53d9460366704ea38ae4d9f6933188f327b456"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e3fb1677c720409d5f671e39bac6c9e0e422584e5f518bfd50aa4cbbea02433f"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:642c2e7a804fcf18c222e1060df25fc210b9c58db7c91416fb055897fc27e8cc"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b7c050ae976e286906dd3f26009e117eb000fb2cf3533398c5ad9ccc86867b1"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ef3f72c9666bba2bab70d2a8b79f2c6d2c1a42a7f7e2b0ec83bb2f9e383950af"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:01c205616a89d09827986bc4e859bcabd64f5a0662a7fe95e0d359424e0e071b"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:5a0f54ce2c092aaf439813735584b9537cad479575a09892b8352fea5e988dc0"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2cf71233a0ed05ccdabe209c606fe0bac7379fdcf687f39b944420d2a09fdb57"},
{file = "wrapt-1.14.1-cp38-cp38-win32.whl", hash = "sha256:aa31fdcc33fef9eb2552cbcbfee7773d5a6792c137b359e82879c101e98584c5"},
{file = "wrapt-1.14.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1967f46ea8f2db647c786e78d8cc7e4313dbd1b0aca360592d8027b8508e24d"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3232822c7d98d23895ccc443bbdf57c7412c5a65996c30442ebe6ed3df335383"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:988635d122aaf2bdcef9e795435662bcd65b02f4f4c1ae37fbee7401c440b3a7"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cca3c2cdadb362116235fdbd411735de4328c61425b0aa9f872fd76d02c4e86"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d52a25136894c63de15a35bc0bdc5adb4b0e173b9c0d07a2be9d3ca64a332735"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40e7bc81c9e2b2734ea4bc1aceb8a8f0ceaac7c5299bc5d69e37c44d9081d43b"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b9b7a708dd92306328117d8c4b62e2194d00c365f18eff11a9b53c6f923b01e3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:6a9a25751acb379b466ff6be78a315e2b439d4c94c1e99cb7266d40a537995d3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:34aa51c45f28ba7f12accd624225e2b1e5a3a45206aa191f6f9aac931d9d56fe"},
{file = "wrapt-1.14.1-cp39-cp39-win32.whl", hash = "sha256:dee0ce50c6a2dd9056c20db781e9c1cfd33e77d2d569f5d1d9321c641bb903d5"},
{file = "wrapt-1.14.1-cp39-cp39-win_amd64.whl", hash = "sha256:dee60e1de1898bde3b238f18340eec6148986da0455d8ba7848d50470a7a32fb"},
{file = "wrapt-1.14.1.tar.gz", hash = "sha256:380a85cf89e0e69b7cfbe2ea9f765f004ff419f34194018a6827ac0e3edfed4d"},
]
yapf = [
{file = "yapf-0.32.0-py2.py3-none-any.whl", hash = "sha256:8fea849025584e486fd06d6ba2bed717f396080fd3cc236ba10cb97c4c51cf32"},
{file = "yapf-0.32.0.tar.gz", hash = "sha256:a3f5085d37ef7e3e004c4ba9f9b3e40c54ff1901cd111f05145ae313a7c67d1b"},
]
yappi = [
{file = "yappi-1.3.3.tar.gz", hash = "sha256:855890cd9a90d833dd2df632d648de8ccd0a4c3131f1edc8abd004db0625b5e8"},
]

View File

@ -1,178 +0,0 @@
output_directory: docs/api
loaders:
- type: python
search_path: [ormar/]
processors:
- type: filter
documented_only: true
skip_empty_modules: false
exclude_private: false
exclude_special: false
- type: sphinx
- type: crossref
renderer:
type: mkdocs
pages:
- title: Models
children:
- title: Model Metaclass
contents:
- models.metaclass.*
- title: Model
contents:
- models.model.*
- title: Model Row
contents:
- models.model_row.*
- title: New BaseModel
contents:
- models.newbasemodel.*
- title: Excludable Items
contents:
- models.excludable.*
- title: Traversible
contents:
- models.traversible.*
- title: Model Table Proxy
contents:
- models.modelproxy.*
- title: Helpers
children:
- title: models
contents:
- models.helpers.models.*
- title: pydantic
contents:
- models.helpers.pydantic.*
- title: relations
contents:
- models.helpers.relations.*
- title: sqlalchemy
contents:
- models.helpers.sqlalchemy.*
- title: validation
contents:
- models.helpers.validation.*
- title: related names validation
contents:
- models.helpers.related_names_validation.*
- title: Mixins
children:
- title: Alias Mixin
contents:
- models.mixins.alias_mixin.*
- title: Excludable Mixin
contents:
- models.mixins.excludable_mixin.*
- title: Merge Model Mixin
contents:
- models.mixins.merge_mixin.*
- title: Prefetch Query Mixin
contents:
- models.mixins.prefetch_mixin.*
- title: Relation Mixin
contents:
- models.mixins.relation_mixin.*
- title: Save Prepare Mixin
contents:
- models.mixins.save_mixin.*
- title: Descriptors
children:
- title: descriptors
contents:
- models.descriptors.descriptors.*
- title: Fields
children:
- title: Base Field
contents:
- fields.base.*
- title: Model Fields
contents:
- fields.model_fields.*
- title: Foreign Key
contents:
- fields.foreign_key.*
- title: Many To Many
contents:
- fields.many_to_many.*
- title: Decorators
contents:
- decorators.property_field.*
- title: Query Set
children:
- title: Query Set
contents:
- queryset.queryset.*
- title: Query
contents:
- queryset.query.*
- title: Prefetch Query
contents:
- queryset.prefetch_query.*
- title: Join
contents:
- queryset.join.*
- title: Clause
contents:
- queryset.clause.*
- title: Filter Query
contents:
- queryset.filter_query.*
- title: Order Query
contents:
- queryset.order_query.*
- title: Limit Query
contents:
- queryset.limit_query.*
- title: Offset Query
contents:
- queryset.offset_query.*
- title: Field accessor
contents:
- queryset.field_accessor.*
- title: Reverse Alias Resolver
contents:
- queryset.reverse_alias_resolver.*
- title: Utils
contents:
- queryset.utils.*
- title: Relations
children:
- title: Relation Manager
contents:
- relations.relation_manager.*
- title: Relation
contents:
- relations.relation.*
- title: Relation Proxy
contents:
- relations.relation_proxy.*
- title: Queryset Proxy
contents:
- relations.querysetproxy.*
- title: Alias Manager
contents:
- relations.alias_manager.*
- title: Utils
contents:
- relations.utils.*
- title: Signals
children:
- title: Signal
contents:
- signals.*
- title: Decorators
contents:
- decorators.signals.*
- title: Exceptions
contents:
- exceptions.*
mkdocs_config:
site_name: Ormar
theme:
name: material
highlightjs: true
hljs_languages:
- python
palette:
primary: indigo
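The pydoc-markdown.yml removed above hard-coded the whole API navigation tree under `renderer.pages`. The mkdocstrings setup this commit switches to normally builds that tree at docs build time with mkdocs-gen-files and mkdocs-literate-nav, both of which appear in the pyproject.toml diff below. The generation script itself is not part of this excerpt, so the following is only a minimal sketch of the usual recipe; the file name `docs/gen_ref_pages.py`, the `ormar/` source path, and the `api/` output folder are assumptions, not values taken from this commit.

```python
# docs/gen_ref_pages.py (hypothetical name) - run by the gen-files plugin on
# every docs build; emits one stub page per module plus a literate-nav summary.
from pathlib import Path

import mkdocs_gen_files

nav = mkdocs_gen_files.Nav()

for path in sorted(Path("ormar").rglob("*.py")):
    module_path = path.relative_to("ormar").with_suffix("")
    doc_path = path.relative_to("ormar").with_suffix(".md")
    full_doc_path = Path("api", doc_path)

    parts = tuple(module_path.parts)
    if parts and parts[-1] == "__init__":
        parts = parts[:-1]
        doc_path = doc_path.with_name("index.md")
        full_doc_path = full_doc_path.with_name("index.md")
    elif parts and parts[-1] == "__main__":
        continue
    if not parts:
        continue

    # Record the page in the navigation summary consumed by literate-nav.
    nav[parts] = doc_path.as_posix()

    # The stub page contains only the mkdocstrings identifier directive.
    with mkdocs_gen_files.open(full_doc_path, "w") as fd:
        fd.write(f"::: ormar.{'.'.join(parts)}")

with mkdocs_gen_files.open("api/SUMMARY.md", "w") as nav_file:
    nav_file.writelines(nav.build_literate_nav())
```

mkdocs-section-index, also added below, then lets the generated index.md pages double as section landing pages, which replaces the static `pages:` tree from the old renderer config.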

View File

@ -110,7 +110,10 @@ types-toml = "^0.10.6"
mkdocs = "^1.2.3" mkdocs = "^1.2.3"
mkdocs-material = ">=8.1.2,<8.3" mkdocs-material = ">=8.1.2,<8.3"
mkdocs-material-extensions = "^1.0.3" mkdocs-material-extensions = "^1.0.3"
pydoc-markdown = "^4.5.0" mkdocstrings = {version = "==0.18", extras = ["python"]}
mkdocs-gen-files = "^0.3.4"
mkdocs-literate-nav = "^0.4.1"
mkdocs-section-index = "^0.3.4"
dataclasses = { version = ">=0.6.0,<0.8 || >0.8,<1.0.0" }
# Performance testing
@ -158,3 +161,8 @@ ignore_errors = true
module = ["sqlalchemy.*", "asyncpg"] module = ["sqlalchemy.*", "asyncpg"]
ignore_missing_imports = true ignore_missing_imports = true
[tool.yapf]
based_on_style = "pep8"
disable_ending_comma_heuristic = true
split_arguments_when_comma_terminated = true
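The last hunk adds a [tool.yapf] style table, while the lock file section above drops the pinned yapf hashes, so the style only takes effect where yapf is installed separately. Below is a minimal sketch of exercising that style programmatically; the sample snippet is made up, and reading the style straight from pyproject.toml assumes a yapf build with TOML support.

```python
# Reformat a throwaway snippet with the [tool.yapf] style from pyproject.toml.
# Requires yapf installed with TOML support; if the installed version cannot
# read pyproject.toml, pass an explicit style such as style_config="pep8".
from yapf.yapflib.yapf_api import FormatCode

snippet = "def add(a,b ,c):\n    return (a+b)*c\n"

formatted, changed = FormatCode(snippet, style_config="pyproject.toml")
print(formatted)
print("changed:", changed)
```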