Compare commits
6 commits: c610a3e284 ... d8c925b817

| SHA1 | Author | Date |
|---|---|---|
| d8c925b817 | | |
| 33ba248c49 | | |
| b63d69c81d | | |
| d1976a5fd8 | | |
| ea12318b88 | | |
| 28059dcedf | | |

47 changed files with 8480 additions and 35 deletions
.gitignore (vendored): 5 additions

@@ -1,2 +1,7 @@
 .direnv
 result
+__pycache__/
+*.egg-info/
+.venv/
+.pytest_cache/
+*.pyc
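The ignore rules in this commit can be sanity-checked with `git check-ignore` in a scratch repository; this is only a sketch, and the file paths queried below are illustrative, not from this repo.

```shell
# Build a throwaway repo containing the final .gitignore from this diff,
# then ask git which of several made-up paths it would ignore.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
printf '%s\n' '.direnv' 'result' '__pycache__/' '*.egg-info/' '.venv/' '.pytest_cache/' '*.pyc' > .gitignore
# -v prints the .gitignore line that matched each ignored path
git check-ignore -v foo.pyc .venv/bin/python .pytest_cache/README.md
```

`check-ignore` exits non-zero if any queried path is not ignored, which makes it convenient in CI-style scripts.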
LICENSE (new file): 674 additions

@@ -0,0 +1,674 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007

Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.

Preamble

The GNU General Public License is a free, copyleft license for
software and other kinds of works.

The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.

When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.

For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.

Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.

For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.

Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.

Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.

The precise terms and conditions for copying, distribution and
modification follow.

TERMS AND CONDITIONS

0. Definitions.

"This License" refers to version 3 of the GNU General Public License.

"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.

To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

A "covered work" means either the unmodified Program or a work based
on the Program.

To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

1. Source Code.

The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.

A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

The Corresponding Source for a work in source code form is that
same work.

2. Basic Permissions.

All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.

3. Protecting Users' Legal Rights From Anti-Circumvention Law.

No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

4. Conveying Verbatim Copies.

You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

5. Conveying Modified Source Versions.

You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.

b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".

c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.

d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.

A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.

6. Conveying Non-Source Forms.

You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.

b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.

c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.

d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.

e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.

A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.

7. Additional Terms.

"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or

b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or

c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or

d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or

e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or

f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.

All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

8. Termination.

You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

9. Acceptance Not Required for Having Copies.

You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

10. Automatic Licensing of Downstream Recipients.

Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.

An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.

11. Patents.

A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".

A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.

Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.

12. No Surrender of Others' Freedom.

If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.

13. Use with the GNU Affero General Public License.

Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.

14. Revised Versions of this License.

The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
|
||||
GNU General Public License, you may choose any version ever published
|
||||
by the Free Software Foundation.
|
||||
|
||||
If the Program specifies that a proxy can decide which future
|
||||
versions of the GNU General Public License can be used, that proxy's
|
||||
public statement of acceptance of a version permanently authorizes you
|
||||
to choose that version for the Program.
|
||||
|
||||
Later license versions may give you additional or different
|
||||
permissions. However, no additional obligations are imposed on any
|
||||
author or copyright holder as a result of your choosing to follow a
|
||||
later version.
|
||||
|
||||
15. Disclaimer of Warranty.
|
||||
|
||||
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
|
||||
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
|
||||
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
|
||||
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
|
||||
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
|
||||
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
|
||||
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
|
||||
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
|
||||
|
||||
16. Limitation of Liability.
|
||||
|
||||
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
|
||||
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
|
||||
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
|
||||
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
|
||||
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
|
||||
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
|
||||
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
|
||||
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
|
||||
SUCH DAMAGES.
|
||||
|
||||
17. Interpretation of Sections 15 and 16.
|
||||
|
||||
If the disclaimer of warranty and limitation of liability provided
|
||||
above cannot be given local legal effect according to their terms,
|
||||
reviewing courts shall apply local law that most closely approximates
|
||||
an absolute waiver of all civil liability in connection with the
|
||||
Program, unless a warranty or assumption of liability accompanies a
|
||||
copy of the Program in return for a fee.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
How to Apply These Terms to Your New Programs
|
||||
|
||||
If you develop a new program, and you want it to be of the greatest
|
||||
possible use to the public, the best way to achieve this is to make it
|
||||
free software which everyone can redistribute and change under these terms.
|
||||
|
||||
To do so, attach the following notices to the program. It is safest
|
||||
to attach them to the start of each source file to most effectively
|
||||
state the exclusion of warranty; and each file should have at least
|
||||
the "copyright" line and a pointer to where the full notice is found.
|
||||
|
||||
<one line to give the program's name and a brief idea of what it does.>
|
||||
Copyright (C) <year> <name of author>
|
||||
|
||||
This program is free software: you can redistribute it and/or modify
|
||||
it under the terms of the GNU General Public License as published by
|
||||
the Free Software Foundation, either version 3 of the License, or
|
||||
(at your option) any later version.
|
||||
|
||||
This program is distributed in the hope that it will be useful,
|
||||
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
GNU General Public License for more details.
|
||||
|
||||
You should have received a copy of the GNU General Public License
|
||||
along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||
|
||||
Also add information on how to contact you by electronic and paper mail.
|
||||
|
||||
If the program does terminal interaction, make it output a short
|
||||
notice like this when it starts in an interactive mode:
|
||||
|
||||
<program> Copyright (C) <year> <name of author>
|
||||
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
|
||||
This is free software, and you are welcome to redistribute it
|
||||
under certain conditions; type `show c' for details.
|
||||
|
||||
The hypothetical commands `show w' and `show c' should show the appropriate
|
||||
parts of the General Public License. Of course, your program's commands
|
||||
might be different; for a GUI interface, you would use an "about box".
|
||||
|
||||
You should also get your employer (if you work as a programmer) or school,
|
||||
if any, to sign a "copyright disclaimer" for the program, if necessary.
|
||||
For more information on this, and how to apply and follow the GNU GPL, see
|
||||
<https://www.gnu.org/licenses/>.
|
||||
|
||||
The GNU General Public License does not permit incorporating your program
|
||||
into proprietary programs. If your program is a subroutine library, you
|
||||
may consider it more useful to permit linking proprietary applications with
|
||||
the library. If this is what you want to do, use the GNU Lesser General
|
||||
Public License instead of this License. But first, please read
|
||||
<https://www.gnu.org/licenses/why-not-lgpl.html>.
|
||||
3  agent/nix_builder_autoscaler/__init__.py  Normal file
@@ -0,0 +1,3 @@
"""Nix builder autoscaler daemon."""

__version__ = "0.1.0"
255  agent/nix_builder_autoscaler/__main__.py  Normal file
@@ -0,0 +1,255 @@
"""Daemon entry point: python -m nix_builder_autoscaler."""

from __future__ import annotations

import argparse
import logging
import signal
import threading
import time
from pathlib import Path
from types import FrameType

import uvicorn

from .api import create_app
from .config import AppConfig, load_config
from .logging import setup_logging
from .metrics import MetricsRegistry
from .providers.clock import SystemClock
from .providers.haproxy import HAProxyRuntime
from .reconciler import Reconciler
from .runtime.ec2 import EC2Runtime
from .scheduler import scheduling_tick
from .state_db import StateDB

log = logging.getLogger(__name__)


class LoopHealth:
    """Thread-safe last-success timestamps for daemon loops."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._last_success: dict[str, float] = {}

    def mark_success(self, loop_name: str) -> None:
        with self._lock:
            self._last_success[loop_name] = time.monotonic()

    def is_fresh(self, loop_name: str, max_age_seconds: float) -> bool:
        with self._lock:
            last = self._last_success.get(loop_name)
            if last is None:
                return False
            return (time.monotonic() - last) <= max_age_seconds


def _max_staleness(interval_seconds: float) -> float:
    return max(interval_seconds * 3.0, 15.0)


def _scheduler_loop(
    db: StateDB,
    runtime: EC2Runtime,
    config: AppConfig,
    clock: SystemClock,
    metrics: MetricsRegistry,
    stop_event: threading.Event,
    loop_health: LoopHealth,
) -> None:
    while not stop_event.is_set():
        try:
            scheduling_tick(db, runtime, config, clock, metrics)
            loop_health.mark_success("scheduler")
        except Exception:
            log.exception("scheduler_tick_failed")
        stop_event.wait(config.scheduler.tick_seconds)


def _reconciler_loop(
    reconciler: Reconciler,
    config: AppConfig,
    stop_event: threading.Event,
    loop_health: LoopHealth,
    reconcile_lock: threading.Lock,
) -> None:
    while not stop_event.is_set():
        try:
            with reconcile_lock:
                reconciler.tick()
            loop_health.mark_success("reconciler")
        except Exception:
            log.exception("reconciler_tick_failed")
        stop_event.wait(config.scheduler.reconcile_seconds)


def _metrics_health_loop(
    metrics: MetricsRegistry,
    stop_event: threading.Event,
    loop_health: LoopHealth,
    interval_seconds: float,
) -> None:
    while not stop_event.is_set():
        try:
            metrics.gauge("autoscaler_loop_up", {"loop": "scheduler"}, 1.0)
            metrics.gauge("autoscaler_loop_up", {"loop": "reconciler"}, 1.0)
            metrics.gauge("autoscaler_loop_up", {"loop": "metrics"}, 1.0)
            loop_health.mark_success("metrics")
        except Exception:
            log.exception("metrics_health_tick_failed")
        stop_event.wait(interval_seconds)


def _parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(
        prog="nix-builder-autoscaler",
        description="Nix builder autoscaler daemon",
    )
    parser.add_argument("--config", required=True, help="Path to TOML config file")
    return parser.parse_args()


def main() -> None:
    """Parse config, initialize components, and run the daemon."""
    args = _parse_args()
    config = load_config(Path(args.config))
    setup_logging(config.server.log_level)

    clock = SystemClock()
    db = StateDB(config.server.db_path, clock=clock)
    db.init_schema()
    db.init_slots(
        config.haproxy.slot_prefix,
        config.haproxy.slot_count,
        config.capacity.default_system,
        config.haproxy.backend,
    )

    runtime = EC2Runtime(config.aws)
    haproxy = HAProxyRuntime(
        config.haproxy.runtime_socket,
        config.haproxy.backend,
        config.haproxy.slot_prefix,
    )
    metrics = MetricsRegistry()
    reconciler = Reconciler(db, runtime, haproxy, config, clock, metrics)
    reconciler.tick()

    stop_event = threading.Event()
    scheduler_thread: threading.Thread | None = None
    reconciler_thread: threading.Thread | None = None
    metrics_thread: threading.Thread | None = None
    server: uvicorn.Server | None = None
    loop_health = LoopHealth()
    reconcile_lock = threading.Lock()
    metrics_interval = 5.0

    def scheduler_running() -> bool:
        return scheduler_thread is not None and scheduler_thread.is_alive()

    def reconciler_running() -> bool:
        return reconciler_thread is not None and reconciler_thread.is_alive()

    def metrics_running() -> bool:
        return metrics_thread is not None and metrics_thread.is_alive()

    def ready_check() -> bool:
        checks = [
            ("scheduler", scheduler_running(), _max_staleness(config.scheduler.tick_seconds)),
            (
                "reconciler",
                reconciler_running(),
                _max_staleness(config.scheduler.reconcile_seconds),
            ),
            ("metrics", metrics_running(), _max_staleness(metrics_interval)),
        ]
        for loop_name, alive, max_age in checks:
            if not alive:
                return False
            if not loop_health.is_fresh(loop_name, max_age):
                return False
        return True

    def reconcile_now() -> dict[str, object]:
        with reconcile_lock:
            reconciler.tick()
        loop_health.mark_success("reconciler")
        return {"triggered": True}

    app = create_app(
        db,
        config,
        clock,
        metrics,
        runtime=runtime,
        haproxy=haproxy,
        scheduler_running=scheduler_running,
        reconciler_running=reconciler_running,
        ready_check=ready_check,
        reconcile_now=reconcile_now,
    )

    loop_health.mark_success("scheduler")
    loop_health.mark_success("reconciler")
    loop_health.mark_success("metrics")

    scheduler_thread = threading.Thread(
        target=_scheduler_loop,
        name="autoscaler-scheduler",
        args=(db, runtime, config, clock, metrics, stop_event, loop_health),
        daemon=True,
    )
    reconciler_thread = threading.Thread(
        target=_reconciler_loop,
        name="autoscaler-reconciler",
        args=(reconciler, config, stop_event, loop_health, reconcile_lock),
        daemon=True,
    )
    metrics_thread = threading.Thread(
        target=_metrics_health_loop,
        name="autoscaler-metrics-health",
        args=(metrics, stop_event, loop_health, metrics_interval),
        daemon=True,
    )

    scheduler_thread.start()
    reconciler_thread.start()
    metrics_thread.start()

    socket_path = Path(config.server.socket_path)
    socket_path.parent.mkdir(parents=True, exist_ok=True)
    if socket_path.exists():
        socket_path.unlink()

    uvicorn_config = uvicorn.Config(
        app=app,
        uds=config.server.socket_path,
        log_level=config.server.log_level.lower(),
    )
    server = uvicorn.Server(uvicorn_config)

    def _handle_signal(signum: int, _: FrameType | None) -> None:
        log.info("shutdown_signal", extra={"signal": signum})
        stop_event.set()
        if server is not None:
            server.should_exit = True

    signal.signal(signal.SIGTERM, _handle_signal)
    signal.signal(signal.SIGINT, _handle_signal)

    try:
        server.run()
    finally:
        stop_event.set()
        if scheduler_thread is not None:
            scheduler_thread.join(timeout=10)
        if reconciler_thread is not None:
            reconciler_thread.join(timeout=10)
        if metrics_thread is not None:
            metrics_thread.join(timeout=10)
        db.close()


if __name__ == "__main__":
    main()
303  agent/nix_builder_autoscaler/api.py  Normal file
@@ -0,0 +1,303 @@
"""FastAPI application for the autoscaler daemon."""

from __future__ import annotations

import logging
import uuid
from collections.abc import Callable
from datetime import datetime
from typing import TYPE_CHECKING, NoReturn

from fastapi import FastAPI, HTTPException, Request, Response
from fastapi.responses import JSONResponse
from pydantic import BaseModel

from .models import (
    CapacityHint,
    ErrorDetail,
    ErrorResponse,
    HealthResponse,
    ReservationPhase,
    ReservationRequest,
    ReservationResponse,
    SlotInfo,
    SlotState,
    StateSummary,
)

if TYPE_CHECKING:
    from .config import AppConfig
    from .metrics import MetricsRegistry
    from .providers.clock import Clock
    from .providers.haproxy import HAProxyRuntime
    from .runtime.base import RuntimeAdapter
    from .state_db import StateDB

log = logging.getLogger(__name__)


class SlotAdminRequest(BaseModel):
    """Admin action request that targets a slot."""

    slot_id: str


def _parse_required_dt(value: str) -> datetime:
    return datetime.fromisoformat(value)


def _parse_optional_dt(value: str | None) -> datetime | None:
    if value is None:
        return None
    return datetime.fromisoformat(value)


def _resv_to_response(resv: dict) -> ReservationResponse:
    return ReservationResponse(
        reservation_id=str(resv["reservation_id"]),
        phase=ReservationPhase(str(resv["phase"])),
        slot=resv.get("slot_id"),
        instance_id=resv.get("instance_id"),
        system=str(resv["system"]),
        created_at=_parse_required_dt(str(resv["created_at"])),
        updated_at=_parse_required_dt(str(resv["updated_at"])),
        expires_at=_parse_required_dt(str(resv["expires_at"])),
        released_at=_parse_optional_dt(resv.get("released_at")),
    )


def _slot_to_info(slot: dict) -> SlotInfo:
    return SlotInfo(
        slot_id=str(slot["slot_id"]),
        system=str(slot["system"]),
        state=SlotState(str(slot["state"])),
        instance_id=slot.get("instance_id"),
        instance_ip=slot.get("instance_ip"),
        lease_count=int(slot["lease_count"]),
        last_state_change=_parse_required_dt(str(slot["last_state_change"])),
    )


def _error_response(
    request: Request,
    status_code: int,
    code: str,
    message: str,
    retryable: bool = False,
) -> NoReturn:
    request_id = getattr(request.state, "request_id", str(uuid.uuid4()))
    payload = ErrorResponse(
        error=ErrorDetail(code=code, message=message, retryable=retryable),
        request_id=request_id,
    )
    raise HTTPException(status_code=status_code, detail=payload.model_dump(mode="json"))


def create_app(
    db: StateDB,
    config: AppConfig,
    clock: Clock,
    metrics: MetricsRegistry,
    runtime: RuntimeAdapter | None = None,
    haproxy: HAProxyRuntime | None = None,
    scheduler_running: Callable[[], bool] | None = None,
    reconciler_running: Callable[[], bool] | None = None,
    ready_check: Callable[[], bool] | None = None,
    reconcile_now: Callable[[], dict[str, object] | None] | None = None,
) -> FastAPI:
    """Create the FastAPI application."""
    app = FastAPI(title="nix-builder-autoscaler", version="0.1.0")

    app.state.db = db
    app.state.config = config
    app.state.clock = clock
    app.state.metrics = metrics
    app.state.runtime = runtime
    app.state.haproxy = haproxy

    @app.middleware("http")
    async def request_id_middleware(request: Request, call_next: Callable) -> Response:
        request.state.request_id = str(uuid.uuid4())
        response = await call_next(request)
        response.headers["x-request-id"] = request.state.request_id
        return response

    @app.exception_handler(HTTPException)
    async def http_exception_handler(request: Request, exc: HTTPException) -> JSONResponse:
        detail = exc.detail
        if isinstance(detail, dict) and "error" in detail and "request_id" in detail:
            return JSONResponse(status_code=exc.status_code, content=detail)

        request_id = getattr(request.state, "request_id", str(uuid.uuid4()))
        payload = ErrorResponse(
            error=ErrorDetail(
                code="http_error",
                message=str(detail) if detail else "Request failed",
                retryable=False,
            ),
            request_id=request_id,
        )
        return JSONResponse(status_code=exc.status_code, content=payload.model_dump(mode="json"))

    @app.post("/v1/reservations", response_model=ReservationResponse)
    def create_reservation(body: ReservationRequest) -> ReservationResponse:
        resv = db.create_reservation(
            body.system,
            body.reason,
            body.build_id,
            config.capacity.reservation_ttl_seconds,
        )
        return _resv_to_response(resv)

    @app.get("/v1/reservations/{reservation_id}", response_model=ReservationResponse)
    def get_reservation(reservation_id: str, request: Request) -> ReservationResponse:
        resv = db.get_reservation(reservation_id)
        if resv is None:
            _error_response(request, 404, "not_found", "Reservation not found")
        return _resv_to_response(resv)

    @app.post("/v1/reservations/{reservation_id}/release", response_model=ReservationResponse)
    def release_reservation(reservation_id: str, request: Request) -> ReservationResponse:
        resv = db.release_reservation(reservation_id)
        if resv is None:
            _error_response(request, 404, "not_found", "Reservation not found")
        return _resv_to_response(resv)

    @app.get("/v1/reservations", response_model=list[ReservationResponse])
    def list_reservations(
        phase: ReservationPhase | None = None,
    ) -> list[ReservationResponse]:
        reservations = db.list_reservations(phase)
        return [_resv_to_response(resv) for resv in reservations]

    @app.get("/v1/slots", response_model=list[SlotInfo])
    def list_slots() -> list[SlotInfo]:
        slots = db.list_slots()
        return [_slot_to_info(slot) for slot in slots]

    @app.get("/v1/state/summary", response_model=StateSummary)
    def state_summary() -> StateSummary:
        summary = db.get_state_summary()
        return StateSummary.model_validate(summary)

    @app.post("/v1/hints/capacity")
    def capacity_hint(hint: CapacityHint) -> dict[str, str]:
        log.info(
            "capacity_hint",
            extra={
                "builder": hint.builder,
                "queued": hint.queued,
                "running": hint.running,
                "system": hint.system,
                "timestamp": hint.timestamp.isoformat(),
            },
        )
        return {"status": "accepted"}

    @app.get("/health/live", response_model=HealthResponse)
    def health_live() -> HealthResponse:
        return HealthResponse(status="ok")

    @app.get("/health/ready", response_model=HealthResponse)
    def health_ready() -> HealthResponse:
        if ready_check is not None and not ready_check():
            return JSONResponse(  # type: ignore[return-value]
                status_code=503,
                content=HealthResponse(status="degraded").model_dump(mode="json"),
            )
        if scheduler_running is not None and not scheduler_running():
            return JSONResponse(  # type: ignore[return-value]
                status_code=503,
                content=HealthResponse(status="degraded").model_dump(mode="json"),
            )
        if reconciler_running is not None and not reconciler_running():
            return JSONResponse(  # type: ignore[return-value]
                status_code=503,
                content=HealthResponse(status="degraded").model_dump(mode="json"),
            )
        return HealthResponse(status="ok")

    @app.get("/metrics")
    def metrics_endpoint() -> Response:
        return Response(content=metrics.render(), media_type="text/plain")

    @app.post("/v1/admin/drain")
    def admin_drain(body: SlotAdminRequest, request: Request) -> dict[str, str]:
        slot = db.get_slot(body.slot_id)
        if slot is None:
            _error_response(request, 404, "not_found", "Slot not found")
        state = str(slot["state"])
        if state == SlotState.DRAINING.value or state == SlotState.TERMINATING.value:
            return {"status": "accepted", "slot_id": body.slot_id, "state": state}

        allowed_states = {
            SlotState.READY.value,
            SlotState.BINDING.value,
            SlotState.BOOTING.value,
            SlotState.LAUNCHING.value,
        }
        if state not in allowed_states:
            _error_response(
                request,
                409,
                "invalid_state",
                f"Cannot drain slot from state {state}",
            )
        db.update_slot_state(body.slot_id, SlotState.DRAINING, interruption_pending=0)
        return {"status": "accepted", "slot_id": body.slot_id, "state": SlotState.DRAINING.value}

    @app.post("/v1/admin/unquarantine")
    def admin_unquarantine(body: SlotAdminRequest, request: Request) -> dict[str, str]:
        slot = db.get_slot(body.slot_id)
        if slot is None:
            _error_response(request, 404, "not_found", "Slot not found")

        state = str(slot["state"])
        if state != SlotState.ERROR.value:
            _error_response(
                request,
                409,
                "invalid_state",
                f"Cannot unquarantine slot from state {state}",
            )

        db.update_slot_state(
            body.slot_id,
            SlotState.EMPTY,
            instance_id=None,
            instance_ip=None,
            instance_launch_time=None,
            lease_count=0,
            cooldown_until=None,
            interruption_pending=0,
        )
        return {"status": "accepted", "slot_id": body.slot_id, "state": SlotState.EMPTY.value}

    @app.post("/v1/admin/reconcile-now")
    def admin_reconcile_now(request: Request) -> dict[str, object]:
        if reconcile_now is None:
            _error_response(
                request,
                503,
                "not_configured",
                "Reconcile trigger not configured",
                retryable=True,
            )
        try:
            result = reconcile_now()
        except Exception:
            log.exception("admin_reconcile_now_failed")
            _error_response(
                request,
                500,
                "reconcile_failed",
                "Reconcile tick failed",
                retryable=True,
            )

        payload: dict[str, object] = {"status": "accepted"}
        if isinstance(result, dict):
            payload.update(result)
        return payload

    return app
0  agent/nix_builder_autoscaler/bootstrap/__init__.py  Normal file
71  agent/nix_builder_autoscaler/bootstrap/userdata.py  Normal file
@@ -0,0 +1,71 @@
"""EC2 user-data template rendering for builder instance bootstrap.

The generated script follows the NixOS AMI pattern: write config files
that existing systemd services (tailscale-autoconnect, nix-daemon) consume,
rather than calling ``tailscale up`` directly.
"""

from __future__ import annotations

import textwrap


def render_userdata(slot_id: str, region: str, ssm_param: str = "/nix-builder/ts-authkey") -> str:
    """Render a bash user-data script for builder instance bootstrap.

    The returned string is a complete shell script. On NixOS AMIs the script
    is executed by ``amazon-init.service``. The caller (EC2Runtime) passes it
    to ``run_instances`` as ``UserData``; boto3 base64-encodes automatically.

    Args:
        slot_id: Autoscaler slot identifier (used as Tailscale hostname suffix).
        region: AWS region for SSM parameter lookup.
        ssm_param: SSM parameter path containing the Tailscale auth key.
    """
    return textwrap.dedent(f"""\
        #!/usr/bin/env bash
        set -euo pipefail

        SLOT_ID="{slot_id}"
        REGION="{region}"
        SSM_PARAM="{ssm_param}"

        # --- Fetch Tailscale auth key from SSM Parameter Store ---
        mkdir -p /run/credentials
        TS_AUTHKEY=$(aws ssm get-parameter \\
            --region "$REGION" \\
            --with-decryption \\
            --name "$SSM_PARAM" \\
            --query 'Parameter.Value' \\
            --output text)
        printf '%s' "$TS_AUTHKEY" > /run/credentials/tailscale-auth-key
        chmod 600 /run/credentials/tailscale-auth-key

        # --- Resolve instance identity from IMDSv2 for unique hostname ---
        IMDS_TOKEN=$(curl -fsS -X PUT "http://169.254.169.254/latest/api/token" \\
            -H "X-aws-ec2-metadata-token-ttl-seconds: 21600" || true)
        INSTANCE_ID=$(curl -fsS -H "X-aws-ec2-metadata-token: $IMDS_TOKEN" \\
            "http://169.254.169.254/latest/meta-data/instance-id" || true)
        if [ -z "$INSTANCE_ID" ]; then
            INSTANCE_ID="unknown"
        fi

        # --- Write tailscale-autoconnect config ---
        mkdir -p /etc/tailscale
        cat > /etc/tailscale/autoconnect.conf <<TSCONF
        TS_AUTHKEY_FILE=/run/credentials/tailscale-auth-key
        TS_AUTHKEY_EPHEMERAL=true
        TS_AUTHKEY_PREAUTHORIZED=true
        TS_HOSTNAME=nix-builder-$SLOT_ID-$INSTANCE_ID
        TS_EXTRA_ARGS="--ssh --advertise-tags=tag:nix-builder"
        TSCONF

        # --- Start/restart tailscale-autoconnect so it picks up the config ---
        systemctl restart tailscale-autoconnect.service || true

        # --- Ensure nix-daemon is running ---
        systemctl start nix-daemon.service || true

        # --- Signal readiness ---
        echo "ready" > /run/nix-builder-ready
        """)
182  agent/nix_builder_autoscaler/cli.py  Normal file
@@ -0,0 +1,182 @@
"""autoscalerctl CLI entry point."""

from __future__ import annotations

import argparse
import http.client
import json
import socket
from collections.abc import Sequence
from typing import Any


class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a Unix domain socket."""

    def __init__(self, socket_path: str, timeout: float = 5.0) -> None:
        super().__init__("localhost", timeout=timeout)
        self._socket_path = socket_path

    def connect(self) -> None:
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._socket_path)


def _uds_request(
    socket_path: str,
    method: str,
    path: str,
    body: dict[str, Any] | None = None,
) -> tuple[int, dict[str, Any] | list[dict[str, Any]] | str]:
    conn = UnixHTTPConnection(socket_path)
    headers = {"Host": "localhost", "Accept": "application/json"}
    payload: str | None = None
    if body is not None:
        payload = json.dumps(body)
        headers["Content-Type"] = "application/json"

    try:
        conn.request(method, path, body=payload, headers=headers)
        resp = conn.getresponse()
        raw = resp.read()
        text = raw.decode() if raw else ""
        content_type = resp.getheader("Content-Type", "")
        if text and "application/json" in content_type:
            parsed = json.loads(text)
            if isinstance(parsed, dict | list):
                return resp.status, parsed
        return resp.status, text
    finally:
        conn.close()


def _print_table(headers: Sequence[str], rows: Sequence[Sequence[str]]) -> None:
    widths = [len(h) for h in headers]
    for row in rows:
        for idx, cell in enumerate(row):
            widths[idx] = max(widths[idx], len(cell))

    header_line = " ".join(h.ljust(widths[idx]) for idx, h in enumerate(headers))
    separator = " ".join("-" * widths[idx] for idx in range(len(headers)))
    print(header_line)
    print(separator)
    for row in rows:
        print(" ".join(cell.ljust(widths[idx]) for idx, cell in enumerate(row)))


def _print_slots(data: list[dict[str, Any]]) -> None:
    rows: list[list[str]] = []
    for slot in data:
        rows.append(
            [
                str(slot.get("slot_id", "")),
                str(slot.get("state", "")),
                str(slot.get("instance_id") or "-"),
                str(slot.get("instance_ip") or "-"),
                str(slot.get("lease_count", 0)),
            ]
        )
    _print_table(["slot_id", "state", "instance_id", "ip", "leases"], rows)


def _print_reservations(data: list[dict[str, Any]]) -> None:
    rows: list[list[str]] = []
    for resv in data:
        rows.append(
            [
                str(resv.get("reservation_id", "")),
                str(resv.get("phase", "")),
                str(resv.get("system", "")),
                str(resv.get("slot") or "-"),
                str(resv.get("instance_id") or "-"),
            ]
        )
    _print_table(["reservation_id", "phase", "system", "slot", "instance_id"], rows)


def _parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(prog="autoscalerctl", description="Autoscaler CLI")
    parser.add_argument(
        "--socket",
        default="/run/nix-builder-autoscaler/daemon.sock",
        help="Daemon Unix socket path",
    )
    subparsers = parser.add_subparsers(dest="command")
    subparsers.add_parser("status", help="Show state summary")
    subparsers.add_parser("slots", help="List slots")
    subparsers.add_parser("reservations", help="List reservations")

    parser_drain = subparsers.add_parser("drain", help="Drain a slot")
    parser_drain.add_argument("slot_id")
    parser_unq = subparsers.add_parser("unquarantine", help="Unquarantine a slot")
    parser_unq.add_argument("slot_id")
    subparsers.add_parser("reconcile-now", help="Trigger immediate reconcile tick")
    return parser.parse_args()


def _print_error(data: object) -> None:
    if isinstance(data, dict | list):
        print(json.dumps(data, indent=2))
    else:
        print(str(data))


def main() -> None:
    """Entry point for the autoscalerctl CLI."""
    args = _parse_args()
    if not args.command:
        raise SystemExit(1)

    method = "GET"
    path = ""
    body: dict[str, Any] | None = None
    if args.command == "status":
        path = "/v1/state/summary"
    elif args.command == "slots":
        path = "/v1/slots"
    elif args.command == "reservations":
        path = "/v1/reservations"
    elif args.command == "drain":
        method = "POST"
        path = "/v1/admin/drain"
        body = {"slot_id": args.slot_id}
    elif args.command == "unquarantine":
        method = "POST"
        path = "/v1/admin/unquarantine"
        body = {"slot_id": args.slot_id}
    elif args.command == "reconcile-now":
        method = "POST"
        path = "/v1/admin/reconcile-now"
    else:
        raise SystemExit(1)

    try:
        status, data = _uds_request(args.socket, method, path, body=body)
    except OSError as err:
        print(f"Error: cannot connect to daemon at {args.socket}")
        raise SystemExit(1) from err

    if status < 200 or status >= 300:
        _print_error(data)
        raise SystemExit(1)

    if args.command in {"status", "drain", "unquarantine", "reconcile-now"}:
        print(json.dumps(data, indent=2))
    elif args.command == "slots":
        if isinstance(data, list):
            _print_slots(data)
        else:
            _print_error(data)
            raise SystemExit(1)
    elif args.command == "reservations":
        if isinstance(data, list):
            _print_reservations(data)
        else:
            _print_error(data)
            raise SystemExit(1)

    raise SystemExit(0)


if __name__ == "__main__":
    main()
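The fixed-width layout produced by `_print_table` can be sketched standalone. The padding logic is re-implemented here (rather than imported from cli.py) so it runs on its own; the header and row values are illustrative.

```python
# Standalone sketch of the column-padding logic in _print_table above.
headers = ["slot_id", "state"]
rows = [["slot1", "ready"], ["slot2", "draining"]]

# Each column is as wide as its widest cell (header included).
widths = [len(h) for h in headers]
for row in rows:
    for i, cell in enumerate(row):
        widths[i] = max(widths[i], len(cell))

table_lines = [
    " ".join(h.ljust(widths[i]) for i, h in enumerate(headers)),
    " ".join("-" * w for w in widths),
]
for row in rows:
    table_lines.append(" ".join(c.ljust(widths[i]) for i, c in enumerate(row)))

print("\n".join(table_lines))
```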
155  agent/nix_builder_autoscaler/config.py  Normal file
@@ -0,0 +1,155 @@
"""Configuration loading from TOML with environment variable overrides."""

from __future__ import annotations

import os
import tomllib
from dataclasses import dataclass, field
from pathlib import Path


@dataclass
class ServerConfig:
    """[server] section."""

    socket_path: str = "/run/nix-builder-autoscaler/daemon.sock"
    log_level: str = "info"
    db_path: str = "/var/lib/nix-builder-autoscaler/state.db"


@dataclass
class AwsConfig:
    """[aws] section."""

    region: str = "us-east-1"
    launch_template_id: str = ""
    subnet_ids: list[str] = field(default_factory=list)
    security_group_ids: list[str] = field(default_factory=list)
    instance_profile_arn: str = ""


@dataclass
class HaproxyConfig:
    """[haproxy] section."""

    runtime_socket: str = "/run/haproxy/admin.sock"
    backend: str = "all"
    slot_prefix: str = "slot"
    slot_count: int = 8
    check_ready_up_count: int = 2


@dataclass
class SystemConfig:
    """[[systems]] entry for per-architecture capacity policy."""

    name: str = "x86_64-linux"
    min_slots: int = 0
    max_slots: int = 8
    target_warm_slots: int = 0
    max_leases_per_slot: int = 1
    launch_batch_size: int = 1
    scale_down_idle_seconds: int = 900


@dataclass
class CapacityConfig:
    """[capacity] section — global defaults."""

    default_system: str = "x86_64-linux"
    min_slots: int = 0
    max_slots: int = 8
    target_warm_slots: int = 0
    max_leases_per_slot: int = 1
    reservation_ttl_seconds: int = 1200
    idle_scale_down_seconds: int = 900
    drain_timeout_seconds: int = 120


@dataclass
class SecurityConfig:
    """[security] section."""

    socket_mode: str = "0660"
    socket_owner: str = "buildbot"
    socket_group: str = "buildbot"


@dataclass
class SchedulerConfig:
    """[scheduler] section."""

    tick_seconds: float = 3.0
    reconcile_seconds: float = 15.0


@dataclass
class AppConfig:
    """Top-level application configuration."""

    server: ServerConfig = field(default_factory=ServerConfig)
    aws: AwsConfig = field(default_factory=AwsConfig)
    haproxy: HaproxyConfig = field(default_factory=HaproxyConfig)
    capacity: CapacityConfig = field(default_factory=CapacityConfig)
    security: SecurityConfig = field(default_factory=SecurityConfig)
    scheduler: SchedulerConfig = field(default_factory=SchedulerConfig)
    systems: list[SystemConfig] = field(default_factory=list)


# ---------------------------------------------------------------------------
# Environment variable overrides
# ---------------------------------------------------------------------------
# AUTOSCALER_TAILSCALE_API_TOKEN — Tailscale API token for IP discovery
# AWS_REGION — override aws.region
# AWS_ACCESS_KEY_ID — explicit AWS credential
# AWS_SECRET_ACCESS_KEY — explicit AWS credential


def _apply_env_overrides(cfg: AppConfig) -> None:
    """Apply environment variable overrides for secrets and region."""
    region = os.environ.get("AWS_REGION")
    if region:
        cfg.aws.region = region


def _build_dataclass(cls: type, data: dict) -> object:  # noqa: ANN001
    """Construct a dataclass from a dict, ignoring unknown keys."""
    valid = {f.name for f in cls.__dataclass_fields__.values()}  # type: ignore[attr-defined]
    return cls(**{k: v for k, v in data.items() if k in valid})


def load_config(path: Path) -> AppConfig:
    """Load configuration from a TOML file.

    Args:
        path: Path to the TOML config file.

    Returns:
        Validated AppConfig instance.
    """
    with open(path, "rb") as f:
        raw = tomllib.load(f)

    cfg = AppConfig()

    if "server" in raw:
        cfg.server = _build_dataclass(ServerConfig, raw["server"])  # type: ignore[assignment]
    if "aws" in raw:
        cfg.aws = _build_dataclass(AwsConfig, raw["aws"])  # type: ignore[assignment]
    if "haproxy" in raw:
        cfg.haproxy = _build_dataclass(HaproxyConfig, raw["haproxy"])  # type: ignore[assignment]
    if "capacity" in raw:
        cfg.capacity = _build_dataclass(CapacityConfig, raw["capacity"])  # type: ignore[assignment]
    if "security" in raw:
        cfg.security = _build_dataclass(SecurityConfig, raw["security"])  # type: ignore[assignment]
    if "scheduler" in raw:
        cfg.scheduler = _build_dataclass(SchedulerConfig, raw["scheduler"])  # type: ignore[assignment]

    if "systems" in raw:
        cfg.systems = list[SystemConfig](
            _build_dataclass(SystemConfig, s)  # type: ignore[list-item]
            for s in raw["systems"]
        )

    _apply_env_overrides(cfg)
    return cfg
46  agent/nix_builder_autoscaler/logging.py  Normal file
@@ -0,0 +1,46 @@
"""Structured JSON logging setup."""

from __future__ import annotations

import json
import logging
import sys
from datetime import UTC, datetime
from typing import Any


class JSONFormatter(logging.Formatter):
    """Format log records as single-line JSON."""

    EXTRA_FIELDS = ("slot_id", "reservation_id", "instance_id", "request_id")

    def format(self, record: logging.LogRecord) -> str:
        """Format a log record as JSON."""
        entry: dict[str, Any] = {
            "ts": datetime.now(UTC).isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        for field in self.EXTRA_FIELDS:
            val = getattr(record, field, None)
            if val is not None:
                entry[field] = val
        if record.exc_info and record.exc_info[1] is not None:
            entry["exception"] = self.formatException(record.exc_info)
        return json.dumps(entry, default=str)


def setup_logging(level: str = "INFO") -> None:
    """Configure the root logger with JSON output to stderr.

    Args:
        level: Log level name (DEBUG, INFO, WARNING, ERROR).
    """
    handler = logging.StreamHandler(sys.stderr)
    handler.setFormatter(JSONFormatter())

    root = logging.getLogger()
    root.handlers.clear()
    root.addHandler(handler)
    root.setLevel(getattr(logging, level.upper(), logging.INFO))
103  agent/nix_builder_autoscaler/metrics.py  Normal file
@@ -0,0 +1,103 @@
"""In-memory Prometheus metrics registry.

No prometheus_client dependency — formats text manually.
"""

from __future__ import annotations

import threading
from typing import Any


def _labels_key(labels: dict[str, str]) -> tuple[tuple[str, str], ...]:
    return tuple(sorted(labels.items()))


def _format_labels(labels: dict[str, str]) -> str:
    if not labels:
        return ""
    parts = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return "{" + parts + "}"


class MetricsRegistry:
    """Thread-safe in-memory metrics store with Prometheus text output."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._gauges: dict[str, dict[tuple[tuple[str, str], ...], float]] = {}
        self._counters: dict[str, dict[tuple[tuple[str, str], ...], float]] = {}
        self._histograms: dict[str, dict[tuple[tuple[str, str], ...], Any]] = {}

    def gauge(self, name: str, labels: dict[str, str], value: float) -> None:
        """Set a gauge value."""
        key = _labels_key(labels)
        with self._lock:
            if name not in self._gauges:
                self._gauges[name] = {}
            self._gauges[name][key] = value

    def counter(self, name: str, labels: dict[str, str], increment: float = 1.0) -> None:
        """Increment a counter."""
        key = _labels_key(labels)
        with self._lock:
            if name not in self._counters:
                self._counters[name] = {}
            self._counters[name][key] = self._counters[name].get(key, 0.0) + increment

    def histogram_observe(self, name: str, labels: dict[str, str], value: float) -> None:
        """Record a histogram observation.

        Uses fixed buckets: 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 30, 60, 120, +Inf.
        """
        key = _labels_key(labels)
        buckets = (0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0, 30.0, 60.0, 120.0)
        with self._lock:
            if name not in self._histograms:
                self._histograms[name] = {}
            if key not in self._histograms[name]:
                self._histograms[name][key] = {
                    "labels": labels,
                    "buckets": {b: 0 for b in buckets},
                    "sum": 0.0,
                    "count": 0,
                }
            entry = self._histograms[name][key]
            entry["sum"] += value
            entry["count"] += 1
            for b in buckets:
                if value <= b:
                    entry["buckets"][b] += 1

    def render(self) -> str:
        """Render all metrics in Prometheus text exposition format."""
        lines: list[str] = []
        with self._lock:
            for name, series in sorted(self._gauges.items()):
                lines.append(f"# TYPE {name} gauge")
                for key, val in sorted(series.items()):
                    labels = dict(key)
                    lines.append(f"{name}{_format_labels(labels)} {val}")

            for name, series in sorted(self._counters.items()):
                lines.append(f"# TYPE {name} counter")
                for key, val in sorted(series.items()):
                    labels = dict(key)
                    lines.append(f"{name}{_format_labels(labels)} {val}")

            for name, series in sorted(self._histograms.items()):
                lines.append(f"# TYPE {name} histogram")
                for _key, entry in sorted(series.items()):
                    labels = entry["labels"]
                    cumulative = 0
                    for b, count in sorted(entry["buckets"].items()):
                        cumulative += count
                        le_labels = {**labels, "le": str(b)}
                        lines.append(f"{name}_bucket{_format_labels(le_labels)} {cumulative}")
                    inf_labels = {**labels, "le": "+Inf"}
                    lines.append(f"{name}_bucket{_format_labels(inf_labels)} {entry['count']}")
                    lines.append(f"{name}_sum{_format_labels(labels)} {entry['sum']}")
                    lines.append(f"{name}_count{_format_labels(labels)} {entry['count']}")

        lines.append("")
        return "\n".join(lines)
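The label handling and exposition formatting used by `MetricsRegistry` can be sketched standalone. The two helpers are re-declared here so the example runs on its own; the metric name matches the `autoscaler_ec2_terminate_total` counter used by the reconciler.

```python
# Standalone sketch of _labels_key / _format_labels plus counter aggregation.
def labels_key(labels: dict[str, str]) -> tuple:
    # Sorted tuple so the same labels always map to the same series key.
    return tuple(sorted(labels.items()))

def format_labels(labels: dict[str, str]) -> str:
    if not labels:
        return ""
    return "{" + ",".join(f'{k}="{v}"' for k, v in sorted(labels.items())) + "}"

counters: dict[tuple, float] = {}
for _ in range(3):
    key = labels_key({"result": "success"})
    counters[key] = counters.get(key, 0.0) + 1.0

line = f"autoscaler_ec2_terminate_total{format_labels(dict(key))} {counters[key]}"
print(line)
```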
153  agent/nix_builder_autoscaler/models.py  Normal file
@@ -0,0 +1,153 @@
"""Data models for the autoscaler daemon."""

from __future__ import annotations

from datetime import datetime
from enum import StrEnum
from typing import Any

from pydantic import BaseModel, Field


class SlotState(StrEnum):
    """Exhaustive slot states."""

    EMPTY = "empty"
    LAUNCHING = "launching"
    BOOTING = "booting"
    BINDING = "binding"
    READY = "ready"
    DRAINING = "draining"
    TERMINATING = "terminating"
    ERROR = "error"


class ReservationPhase(StrEnum):
    """Exhaustive reservation phases."""

    PENDING = "pending"
    READY = "ready"
    FAILED = "failed"
    RELEASED = "released"
    EXPIRED = "expired"


# ---------------------------------------------------------------------------
# API request models
# ---------------------------------------------------------------------------


class ReservationRequest(BaseModel):
    """POST /v1/reservations request body."""

    system: str
    reason: str
    build_id: int | None = None


class CapacityHint(BaseModel):
    """POST /v1/hints/capacity request body."""

    builder: str
    queued: int
    running: int
    system: str
    timestamp: datetime


# ---------------------------------------------------------------------------
# API response models
# ---------------------------------------------------------------------------


class ReservationResponse(BaseModel):
    """Reservation representation returned by the API."""

    reservation_id: str
    phase: ReservationPhase
    slot: str | None = None
    instance_id: str | None = None
    system: str
    created_at: datetime
    updated_at: datetime
    expires_at: datetime
    released_at: datetime | None = None


class SlotInfo(BaseModel):
    """Slot representation returned by the API."""

    slot_id: str
    system: str
    state: SlotState
    instance_id: str | None = None
    instance_ip: str | None = None
    lease_count: int
    last_state_change: datetime


class SlotsSummary(BaseModel):
    """Aggregate slot counts by state."""

    total: int = 0
    ready: int = 0
    launching: int = 0
    booting: int = 0
    binding: int = 0
    draining: int = 0
    terminating: int = 0
    empty: int = 0
    error: int = 0


class ReservationsSummary(BaseModel):
    """Aggregate reservation counts by phase."""

    pending: int = 0
    ready: int = 0
    failed: int = 0


class Ec2Summary(BaseModel):
    """EC2 subsystem health."""

    api_ok: bool = True
    last_reconcile_at: datetime | None = None


class HaproxySummary(BaseModel):
    """HAProxy subsystem health."""

    socket_ok: bool = True
    last_stat_poll_at: datetime | None = None


class StateSummary(BaseModel):
    """GET /v1/state/summary response."""

    slots: SlotsSummary = Field(default_factory=SlotsSummary)
    reservations: ReservationsSummary = Field(default_factory=ReservationsSummary)
    ec2: Ec2Summary = Field(default_factory=Ec2Summary)
    haproxy: HaproxySummary = Field(default_factory=HaproxySummary)


class ErrorDetail(BaseModel):
    """Structured error detail."""

    code: str
    message: str
    retryable: bool = False
    details: dict[str, Any] | None = None


class ErrorResponse(BaseModel):
    """Standard error response envelope."""

    error: ErrorDetail
    request_id: str


class HealthResponse(BaseModel):
    """Health check response."""

    status: str
0
agent/nix_builder_autoscaler/providers/__init__.py
Normal file
0
agent/nix_builder_autoscaler/providers/__init__.py
Normal file
39
agent/nix_builder_autoscaler/providers/clock.py
Normal file
39
agent/nix_builder_autoscaler/providers/clock.py
Normal file
|
|
@ -0,0 +1,39 @@
|
|||
"""Injectable clock abstraction for testability."""

from __future__ import annotations

from datetime import UTC, datetime, timedelta
from typing import Protocol


class Clock(Protocol):
    """Clock protocol — provides current UTC time."""

    def now(self) -> datetime: ...


class SystemClock:
    """Real wall-clock implementation."""

    def now(self) -> datetime:
        """Return the current UTC time."""
        return datetime.now(UTC)


class FakeClock:
    """Deterministic clock for tests."""

    def __init__(self, start: datetime | None = None) -> None:
        self._now = start or datetime(2026, 1, 1, tzinfo=UTC)

    def now(self) -> datetime:
        """Return the fixed current time."""
        return self._now

    def advance(self, seconds: float) -> None:
        """Advance the clock by the given number of seconds."""
        self._now += timedelta(seconds=seconds)

    def set(self, dt: datetime) -> None:
        """Set the clock to an exact time."""
        self._now = dt
114  agent/nix_builder_autoscaler/providers/haproxy.py  Normal file
@@ -0,0 +1,114 @@
"""HAProxy runtime socket adapter for managing builder slots."""

from __future__ import annotations

import csv
import io
import socket
from dataclasses import dataclass


class HAProxyError(Exception):
    """Error communicating with HAProxy runtime socket."""


@dataclass
class SlotHealth:
    """Health status for a single HAProxy server slot."""

    status: str
    scur: int
    qcur: int


class HAProxyRuntime:
    """HAProxy runtime CLI adapter via Unix socket.

    Communicates with HAProxy using the admin socket text protocol.

    Args:
        socket_path: Path to the HAProxy admin Unix socket.
        backend: HAProxy backend name (e.g. "all").
        slot_prefix: Server name prefix used for builder slots.
    """

    def __init__(self, socket_path: str, backend: str, slot_prefix: str) -> None:
        self._socket_path = socket_path
        self._backend = backend
        self._slot_prefix = slot_prefix

    def set_slot_addr(self, slot_id: str, ip: str, port: int = 22) -> None:
        """Update server address for a slot."""
        cmd = f"set server {self._backend}/{slot_id} addr {ip} port {port}"
        resp = self._run(cmd)
        self._check_response(resp, slot_id)

    def enable_slot(self, slot_id: str) -> None:
        """Enable a server slot."""
        cmd = f"enable server {self._backend}/{slot_id}"
        resp = self._run(cmd)
        self._check_response(resp, slot_id)

    def disable_slot(self, slot_id: str) -> None:
        """Disable a server slot."""
        cmd = f"disable server {self._backend}/{slot_id}"
        resp = self._run(cmd)
        self._check_response(resp, slot_id)

    def slot_is_up(self, slot_id: str) -> bool:
        """Return True when HAProxy health status is UP for slot."""
        health = self.read_slot_health()
        entry = health.get(slot_id)
        return entry is not None and entry.status == "UP"

    def slot_session_count(self, slot_id: str) -> int:
        """Return current active session count for slot."""
        health = self.read_slot_health()
        entry = health.get(slot_id)
        if entry is None:
            raise HAProxyError(f"Slot not found in HAProxy stats: {slot_id}")
        return entry.scur

    def read_slot_health(self) -> dict[str, SlotHealth]:
        """Return full stats snapshot for all slots in the backend."""
        raw = self._run("show stat")
        reader = csv.DictReader(io.StringIO(raw))
        result: dict[str, SlotHealth] = {}
        for row in reader:
            pxname = row.get("# pxname", "").strip()
            svname = row.get("svname", "").strip()
            if pxname == self._backend and svname.startswith(self._slot_prefix):
                result[svname] = SlotHealth(
                    status=row.get("status", "").strip(),
                    scur=int(row.get("scur", "0")),
                    qcur=int(row.get("qcur", "0")),
                )
        return result

    def _run(self, command: str) -> str:
        """Send a command to the HAProxy admin socket and return the response."""
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            sock.connect(self._socket_path)
            sock.sendall((command + "\n").encode())
            sock.shutdown(socket.SHUT_WR)
            chunks: list[bytes] = []
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    break
                chunks.append(chunk)
            return b"".join(chunks).decode()
        except FileNotFoundError as e:
            raise HAProxyError(f"HAProxy socket not found: {self._socket_path}") from e
        except ConnectionRefusedError as e:
            raise HAProxyError(f"Connection refused to HAProxy socket: {self._socket_path}") from e
        finally:
            sock.close()

    @staticmethod
    def _check_response(response: str, slot_id: str) -> None:
        """Raise HAProxyError if the response indicates an error."""
        stripped = response.strip()
        if stripped.startswith(("No such", "Unknown")):
            raise HAProxyError(f"HAProxy error for {slot_id}: {stripped}")
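A sketch of how `read_slot_health` consumes `show stat` output: HAProxy emits CSV whose first header field is literally `# pxname`, which is why the code looks rows up by that key. The payload below is illustrative, not captured from a live socket.

```python
import csv
import io

# Illustrative `show stat` CSV; real output has many more columns.
raw = (
    "# pxname,svname,status,scur,qcur\n"
    "all,slot1,UP,2,0\n"
    "all,slot2,DOWN,0,0\n"
    "other,web1,UP,9,0\n"
)
reader = csv.DictReader(io.StringIO(raw))
# Keep only rows for backend "all" whose server name starts with "slot",
# as read_slot_health does with its configured backend and slot_prefix.
health = {
    row["svname"]: (row["status"], int(row["scur"]))
    for row in reader
    if row["# pxname"] == "all" and row["svname"].startswith("slot")
}
print(health)
```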
330  agent/nix_builder_autoscaler/reconciler.py  Normal file
@@ -0,0 +1,330 @@
"""Reconciler — advances slots through the state machine.

Each tick queries EC2 and HAProxy, then processes each slot according to
its current state: launching→booting→binding→ready, with draining and
terminating paths for teardown.
"""

from __future__ import annotations

import contextlib
import logging
import time
from datetime import datetime
from typing import TYPE_CHECKING

from .models import SlotState
from .providers.haproxy import HAProxyError

if TYPE_CHECKING:
    from .config import AppConfig
    from .metrics import MetricsRegistry
    from .providers.clock import Clock
    from .providers.haproxy import HAProxyRuntime
    from .runtime.base import RuntimeAdapter
    from .state_db import StateDB

log = logging.getLogger(__name__)


class Reconciler:
    """Advances slots through the state machine by polling EC2 and HAProxy.

    Maintains binding health-check counters between ticks.
    """

    def __init__(
        self,
        db: StateDB,
        runtime: RuntimeAdapter,
        haproxy: HAProxyRuntime,
        config: AppConfig,
        clock: Clock,
        metrics: MetricsRegistry,
    ) -> None:
        self._db = db
        self._runtime = runtime
        self._haproxy = haproxy
        self._config = config
        self._clock = clock
        self._metrics = metrics
        self._binding_up_counts: dict[str, int] = {}

    def tick(self) -> None:
        """Execute one reconciliation tick."""
        t0 = time.monotonic()

        # 1. Query EC2
        try:
            managed = self._runtime.list_managed_instances()
        except Exception:
            log.exception("ec2_list_failed")
            managed = []
        ec2_by_slot: dict[str, dict] = {}
        for inst in managed:
            sid = inst.get("slot_id")
            if sid:
                ec2_by_slot[sid] = inst

        # 2. Query HAProxy
        try:
            haproxy_health = self._haproxy_read_slot_health()
        except HAProxyError:
            log.warning("haproxy_stat_failed", exc_info=True)
            haproxy_health = {}

        # 3. Process each slot
        all_slots = self._db.list_slots()
        for slot in all_slots:
            state = slot["state"]
            if state == SlotState.LAUNCHING.value:
                self._handle_launching(slot)
            elif state == SlotState.BOOTING.value:
                self._handle_booting(slot)
            elif state == SlotState.BINDING.value:
                self._handle_binding(slot, haproxy_health)
            elif state == SlotState.READY.value:
                self._handle_ready(slot, ec2_by_slot)
            elif state == SlotState.DRAINING.value:
                self._handle_draining(slot)
            elif state == SlotState.TERMINATING.value:
                self._handle_terminating(slot, ec2_by_slot)

        # 4. Clean stale binding counters
        binding_ids = {s["slot_id"] for s in all_slots if s["state"] == SlotState.BINDING.value}
        stale = [k for k in self._binding_up_counts if k not in binding_ids]
        for k in stale:
            del self._binding_up_counts[k]

        # 5. Emit metrics
        tick_duration = time.monotonic() - t0
        self._update_metrics(tick_duration)

    def _handle_launching(self, slot: dict) -> None:
        """Check if launching instance has reached running state."""
        instance_id = slot["instance_id"]
        if not instance_id:
            self._db.update_slot_state(slot["slot_id"], SlotState.ERROR)
            return

        info = self._runtime.describe_instance(instance_id)
        ec2_state = info["state"]

        if ec2_state == "running":
            self._db.update_slot_state(slot["slot_id"], SlotState.BOOTING)
            log.info("slot_booting", extra={"slot_id": slot["slot_id"]})
        elif ec2_state in ("terminated", "shutting-down"):
            self._db.update_slot_state(slot["slot_id"], SlotState.ERROR)
            log.warning(
                "slot_launch_terminated",
                extra={"slot_id": slot["slot_id"], "ec2_state": ec2_state},
            )

    def _handle_booting(self, slot: dict) -> None:
        """Check if booting instance has a Tailscale IP yet."""
        instance_id = slot["instance_id"]
        if not instance_id:
            self._db.update_slot_state(slot["slot_id"], SlotState.ERROR)
            return

        info = self._runtime.describe_instance(instance_id)
        ec2_state = info["state"]

        if ec2_state in ("terminated", "shutting-down"):
            self._db.update_slot_state(slot["slot_id"], SlotState.ERROR)
            log.warning(
                "slot_boot_terminated",
                extra={"slot_id": slot["slot_id"], "ec2_state": ec2_state},
            )
            return

        tailscale_ip = info.get("tailscale_ip")
        if tailscale_ip is not None:
            self._db.update_slot_state(slot["slot_id"], SlotState.BINDING, instance_ip=tailscale_ip)
            try:
                self._haproxy_set_slot_addr(slot["slot_id"], tailscale_ip)
                self._haproxy_enable_slot(slot["slot_id"])
            except HAProxyError:
                log.warning(
                    "haproxy_binding_setup_failed",
                    extra={"slot_id": slot["slot_id"]},
                    exc_info=True,
                )

    def _handle_binding(self, slot: dict, haproxy_health: dict) -> None:
        """Check HAProxy health to determine when slot is ready."""
        slot_id = slot["slot_id"]
        health = haproxy_health.get(slot_id)

        if health is not None and health.status == "UP":
            count = self._binding_up_counts.get(slot_id, 0) + 1
            self._binding_up_counts[slot_id] = count
            if count >= self._config.haproxy.check_ready_up_count:
                self._db.update_slot_state(slot_id, SlotState.READY)
                self._binding_up_counts.pop(slot_id, None)
                log.info("slot_ready", extra={"slot_id": slot_id})
        else:
            self._binding_up_counts[slot_id] = 0
            # Retry HAProxy setup
            ip = slot.get("instance_ip")
            if ip:
                try:
                    self._haproxy_set_slot_addr(slot_id, ip)
                    self._haproxy_enable_slot(slot_id)
                except HAProxyError:
                    pass

            # Check if instance is still alive
            instance_id = slot.get("instance_id")
            if instance_id:
                info = self._runtime.describe_instance(instance_id)
                if info["state"] in ("terminated", "shutting-down"):
                    self._db.update_slot_state(slot_id, SlotState.ERROR)
                    self._binding_up_counts.pop(slot_id, None)
                    log.warning(
                        "slot_binding_terminated",
                        extra={"slot_id": slot_id},
                    )

    def _handle_ready(self, slot: dict, ec2_by_slot: dict[str, dict]) -> None:
        """Verify EC2 instance is still alive for ready slots."""
        slot_id = slot["slot_id"]
        ec2_info = ec2_by_slot.get(slot_id)

        if ec2_info is None or ec2_info["state"] == "terminated":
            self._db.update_slot_state(slot_id, SlotState.ERROR, instance_id=None, instance_ip=None)
            log.warning("slot_ready_instance_gone", extra={"slot_id": slot_id})
        elif ec2_info["state"] == "shutting-down":
            self._db.update_slot_fields(slot_id, interruption_pending=1)
            log.info("slot_interruption_detected", extra={"slot_id": slot_id})

    def _handle_draining(self, slot: dict) -> None:
        """Disable HAProxy and terminate when drain conditions are met."""
        slot_id = slot["slot_id"]

        # Disable HAProxy (idempotent)
        with contextlib.suppress(HAProxyError):
            self._haproxy_disable_slot(slot_id)

        now = self._clock.now()
        last_change = datetime.fromisoformat(slot["last_state_change"])
        drain_duration = (now - last_change).total_seconds()

        drain_timeout = self._config.capacity.drain_timeout_seconds
        if slot["lease_count"] == 0 or drain_duration >= drain_timeout:
            instance_id = slot.get("instance_id")
            if instance_id:
                try:
                    self._runtime.terminate_instance(instance_id)
                    self._metrics.counter(
                        "autoscaler_ec2_terminate_total",
                        {"result": "success"},
                        1.0,
                    )
                except Exception:
                    self._metrics.counter(
                        "autoscaler_ec2_terminate_total",
                        {"result": "error"},
                        1.0,
                    )
                    log.warning(
                        "terminate_failed",
                        extra={"slot_id": slot_id, "instance_id": instance_id},
                        exc_info=True,
                    )
            self._db.update_slot_state(slot_id, SlotState.TERMINATING)
            log.info(
                "slot_terminating",
                extra={"slot_id": slot_id, "drain_duration": drain_duration},
|
||||
)
|
||||
|
||||
def _handle_terminating(self, slot: dict, ec2_by_slot: dict[str, dict]) -> None:
|
||||
"""Wait for EC2 to confirm termination, then reset slot to empty."""
|
||||
slot_id = slot["slot_id"]
|
||||
instance_id = slot.get("instance_id")
|
||||
|
||||
if not instance_id:
|
||||
self._db.update_slot_state(
|
||||
slot_id, SlotState.EMPTY, instance_id=None, instance_ip=None, lease_count=0
|
||||
)
|
||||
log.info("slot_emptied", extra={"slot_id": slot_id})
|
||||
return
|
||||
|
||||
info = self._runtime.describe_instance(instance_id)
|
||||
if info["state"] == "terminated":
|
||||
self._db.update_slot_state(
|
||||
slot_id, SlotState.EMPTY, instance_id=None, instance_ip=None, lease_count=0
|
||||
)
|
||||
log.info("slot_emptied", extra={"slot_id": slot_id})
|
||||
|
||||
def _update_metrics(self, tick_duration: float) -> None:
|
||||
"""Emit reconciler metrics."""
|
||||
summary = self._db.get_state_summary()
|
||||
for state, count in summary["slots"].items():
|
||||
self._metrics.gauge("autoscaler_slots_total", {"state": state}, float(count))
|
||||
self._metrics.histogram_observe("autoscaler_reconcile_duration_seconds", {}, tick_duration)
|
||||
|
||||
def _haproxy_set_slot_addr(self, slot_id: str, ip: str) -> None:
|
||||
try:
|
||||
self._haproxy.set_slot_addr(slot_id, ip)
|
||||
self._metrics.counter(
|
||||
"autoscaler_haproxy_command_total",
|
||||
{"cmd": "set_slot_addr", "result": "success"},
|
||||
1.0,
|
||||
)
|
||||
except HAProxyError:
|
||||
self._metrics.counter(
|
||||
"autoscaler_haproxy_command_total",
|
||||
{"cmd": "set_slot_addr", "result": "error"},
|
||||
1.0,
|
||||
)
|
||||
raise
|
||||
|
||||
def _haproxy_enable_slot(self, slot_id: str) -> None:
|
||||
try:
|
||||
self._haproxy.enable_slot(slot_id)
|
||||
self._metrics.counter(
|
||||
"autoscaler_haproxy_command_total",
|
||||
{"cmd": "enable_slot", "result": "success"},
|
||||
1.0,
|
||||
)
|
||||
except HAProxyError:
|
||||
self._metrics.counter(
|
||||
"autoscaler_haproxy_command_total",
|
||||
{"cmd": "enable_slot", "result": "error"},
|
||||
1.0,
|
||||
)
|
||||
raise
|
||||
|
||||
def _haproxy_disable_slot(self, slot_id: str) -> None:
|
||||
try:
|
||||
self._haproxy.disable_slot(slot_id)
|
||||
self._metrics.counter(
|
||||
"autoscaler_haproxy_command_total",
|
||||
{"cmd": "disable_slot", "result": "success"},
|
||||
1.0,
|
||||
)
|
||||
except HAProxyError:
|
||||
self._metrics.counter(
|
||||
"autoscaler_haproxy_command_total",
|
||||
{"cmd": "disable_slot", "result": "error"},
|
||||
1.0,
|
||||
)
|
||||
raise
|
||||
|
||||
def _haproxy_read_slot_health(self) -> dict:
|
||||
try:
|
||||
health = self._haproxy.read_slot_health()
|
||||
self._metrics.counter(
|
||||
"autoscaler_haproxy_command_total",
|
||||
{"cmd": "show_stat", "result": "success"},
|
||||
1.0,
|
||||
)
|
||||
return health
|
||||
except HAProxyError:
|
||||
self._metrics.counter(
|
||||
"autoscaler_haproxy_command_total",
|
||||
{"cmd": "show_stat", "result": "error"},
|
||||
1.0,
|
||||
)
|
||||
raise
|
||||
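The drain decision in `_handle_draining` hinges on comparing the slot's ISO-8601 `last_state_change` timestamp against the configured timeout: a slot is terminated as soon as it has no leases, or once it has been draining longer than the timeout. A minimal standalone sketch of that check (function name and parameters are hypothetical, timestamps assumed timezone-aware):

```python
from datetime import datetime, timedelta, timezone

def should_terminate(last_state_change: str, lease_count: int,
                     now: datetime, drain_timeout_seconds: float) -> bool:
    """Terminate when the slot is lease-free, or when the drain
    timeout has elapsed since it entered the draining state."""
    drain_duration = (now - datetime.fromisoformat(last_state_change)).total_seconds()
    return lease_count == 0 or drain_duration >= drain_timeout_seconds

now = datetime(2026, 1, 1, 0, 10, 0, tzinfo=timezone.utc)
started = (now - timedelta(seconds=120)).isoformat()
print(should_terminate(started, 3, now, 300.0))  # leased, only 120s draining -> False
print(should_terminate(started, 0, now, 300.0))  # lease-free -> True
```

The timeout acts as a backstop: a stuck lease cannot pin a spot instance forever.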
0	agent/nix_builder_autoscaler/runtime/__init__.py	Normal file
43	agent/nix_builder_autoscaler/runtime/base.py	Normal file

@@ -0,0 +1,43 @@
"""Abstract base class for runtime adapters."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from abc import ABC, abstractmethod
|
||||
|
||||
|
||||
class RuntimeError(Exception):
|
||||
"""Base error for runtime adapter failures.
|
||||
|
||||
Attributes:
|
||||
category: Normalized error category for retry/classification logic.
|
||||
"""
|
||||
|
||||
def __init__(self, message: str, category: str = "unknown") -> None:
|
||||
super().__init__(message)
|
||||
self.category = category
|
||||
|
||||
|
||||
class RuntimeAdapter(ABC):
|
||||
"""Interface for compute runtime backends (EC2, fake, etc.)."""
|
||||
|
||||
@abstractmethod
|
||||
def launch_spot(self, slot_id: str, user_data: str) -> str:
|
||||
"""Launch a spot instance for slot_id. Return instance_id."""
|
||||
|
||||
@abstractmethod
|
||||
def describe_instance(self, instance_id: str) -> dict:
|
||||
"""Return normalized instance info dict.
|
||||
|
||||
Keys: state, tailscale_ip (or None), launch_time.
|
||||
"""
|
||||
|
||||
@abstractmethod
|
||||
def terminate_instance(self, instance_id: str) -> None:
|
||||
"""Terminate the instance."""
|
||||
|
||||
@abstractmethod
|
||||
def list_managed_instances(self) -> list[dict]:
|
||||
"""Return list of instances tagged ManagedBy=nix-builder-autoscaler.
|
||||
|
||||
Each entry has instance_id, state, slot_id (from AutoscalerSlot tag).
|
||||
"""
|
||||
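The base error type attaches a normalized `category` so callers can branch on failure class (retry vs. surface) without parsing provider-specific messages. A standalone sketch of the pattern (class and category names here are hypothetical stand-ins, mirroring the categories the EC2 adapter maps to):

```python
class AdapterError(Exception):
    """Carries a normalized category alongside the message so retry
    logic never has to string-match provider error text."""

    def __init__(self, message: str, category: str = "unknown") -> None:
        super().__init__(message)
        self.category = category

# Only transient categories are worth retrying.
RETRYABLE_CATEGORIES = {"throttled"}

def classify(err: AdapterError) -> str:
    return "retry" if err.category in RETRYABLE_CATEGORIES else "fail"

print(classify(AdapterError("rate limited", category="throttled")))          # retry
print(classify(AdapterError("no capacity", category="capacity_unavailable")))  # fail
```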
291	agent/nix_builder_autoscaler/runtime/ec2.py	Normal file

@@ -0,0 +1,291 @@
"""EC2 runtime adapter for managing Spot instances."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import http.client
|
||||
import json
|
||||
import logging
|
||||
import random
|
||||
import socket
|
||||
import time
|
||||
from typing import Any
|
||||
|
||||
import boto3
|
||||
from botocore.exceptions import ClientError
|
||||
|
||||
from ..config import AwsConfig
|
||||
from .base import RuntimeAdapter
|
||||
from .base import RuntimeError as RuntimeAdapterError
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
# EC2 ClientError code → normalized error category
|
||||
_ERROR_CATEGORIES: dict[str, str] = {
|
||||
"InsufficientInstanceCapacity": "capacity_unavailable",
|
||||
"SpotMaxPriceTooLow": "price_too_low",
|
||||
"RequestLimitExceeded": "throttled",
|
||||
}
|
||||
|
||||
_RETRYABLE_CODES: frozenset[str] = frozenset({"RequestLimitExceeded"})
|
||||
|
||||
|
||||
class _UnixSocketHTTPConnection(http.client.HTTPConnection):
|
||||
"""HTTP connection over a Unix domain socket."""
|
||||
|
||||
def __init__(self, socket_path: str, timeout: float = 1.0) -> None:
|
||||
super().__init__("local-tailscaled.sock", timeout=timeout)
|
||||
self._socket_path = socket_path
|
||||
|
||||
def connect(self) -> None:
|
||||
self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
|
||||
self.sock.connect(self._socket_path)
|
||||
|
||||
|
||||
class EC2Runtime(RuntimeAdapter):
|
||||
"""EC2 Spot instance runtime adapter.
|
||||
|
||||
Args:
|
||||
config: AWS configuration dataclass.
|
||||
environment: Environment tag value (e.g. ``"dev"``, ``"prod"``).
|
||||
_client: Optional pre-configured boto3 EC2 client (for testing).
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
config: AwsConfig,
|
||||
environment: str = "dev",
|
||||
*,
|
||||
_client: Any = None,
|
||||
_tailscale_socket_path: str = "/run/tailscale/tailscaled.sock",
|
||||
) -> None:
|
||||
self._client: Any = _client or boto3.client("ec2", region_name=config.region)
|
||||
self._launch_template_id = config.launch_template_id
|
||||
self._subnet_ids = list(config.subnet_ids)
|
||||
self._security_group_ids = list(config.security_group_ids)
|
||||
self._instance_profile_arn = config.instance_profile_arn
|
||||
self._environment = environment
|
||||
self._subnet_index = 0
|
||||
self._tailscale_socket_path = _tailscale_socket_path
|
||||
|
||||
def launch_spot(self, slot_id: str, user_data: str) -> str:
|
||||
"""Launch a spot instance for *slot_id*. Return instance ID."""
|
||||
params: dict[str, Any] = {
|
||||
"MinCount": 1,
|
||||
"MaxCount": 1,
|
||||
"LaunchTemplate": {
|
||||
"LaunchTemplateId": self._launch_template_id,
|
||||
"Version": "$Latest",
|
||||
},
|
||||
"InstanceMarketOptions": {
|
||||
"MarketType": "spot",
|
||||
"SpotOptions": {
|
||||
"SpotInstanceType": "one-time",
|
||||
"InstanceInterruptionBehavior": "terminate",
|
||||
},
|
||||
},
|
||||
"UserData": user_data,
|
||||
"TagSpecifications": [
|
||||
{
|
||||
"ResourceType": "instance",
|
||||
"Tags": [
|
||||
{"Key": "Name", "Value": f"nix-builder-{slot_id}"},
|
||||
{"Key": "AutoscalerSlot", "Value": slot_id},
|
||||
{"Key": "ManagedBy", "Value": "nix-builder-autoscaler"},
|
||||
{"Key": "Service", "Value": "nix-builder"},
|
||||
{"Key": "Environment", "Value": self._environment},
|
||||
],
|
||||
}
|
||||
],
|
||||
}
|
||||
|
||||
if self._subnet_ids:
|
||||
subnet = self._subnet_ids[self._subnet_index % len(self._subnet_ids)]
|
||||
self._subnet_index += 1
|
||||
params["SubnetId"] = subnet
|
||||
|
||||
resp = self._call_with_backoff(self._client.run_instances, **params)
|
||||
return resp["Instances"][0]["InstanceId"]
|
||||
|
||||
def describe_instance(self, instance_id: str) -> dict:
|
||||
"""Return normalized instance info dict."""
|
||||
try:
|
||||
resp = self._call_with_backoff(
|
||||
self._client.describe_instances, InstanceIds=[instance_id]
|
||||
)
|
||||
except RuntimeAdapterError:
|
||||
return {"state": "terminated", "tailscale_ip": None, "launch_time": None}
|
||||
|
||||
reservations = resp.get("Reservations", [])
|
||||
if not reservations or not reservations[0].get("Instances"):
|
||||
return {"state": "terminated", "tailscale_ip": None, "launch_time": None}
|
||||
|
||||
inst = reservations[0]["Instances"][0]
|
||||
tags = inst.get("Tags", [])
|
||||
slot_id = self._get_tag(tags, "AutoscalerSlot")
|
||||
state = inst["State"]["Name"]
|
||||
tailscale_ip: str | None = None
|
||||
if state == "running" and slot_id:
|
||||
tailscale_ip = self._discover_tailscale_ip(slot_id, instance_id)
|
||||
|
||||
launch_time = inst.get("LaunchTime")
|
||||
return {
|
||||
"state": state,
|
||||
"tailscale_ip": tailscale_ip,
|
||||
"launch_time": launch_time.isoformat() if launch_time else None,
|
||||
}
|
||||
|
||||
def terminate_instance(self, instance_id: str) -> None:
|
||||
"""Terminate the instance."""
|
||||
self._call_with_backoff(self._client.terminate_instances, InstanceIds=[instance_id])
|
||||
|
||||
def list_managed_instances(self) -> list[dict]:
|
||||
"""Return list of managed instances."""
|
||||
resp = self._call_with_backoff(
|
||||
self._client.describe_instances,
|
||||
Filters=[
|
||||
{"Name": "tag:ManagedBy", "Values": ["nix-builder-autoscaler"]},
|
||||
{
|
||||
"Name": "instance-state-name",
|
||||
"Values": ["pending", "running", "shutting-down", "stopping"],
|
||||
},
|
||||
],
|
||||
)
|
||||
|
||||
result: list[dict] = []
|
||||
for reservation in resp.get("Reservations", []):
|
||||
for inst in reservation.get("Instances", []):
|
||||
tags = inst.get("Tags", [])
|
||||
result.append(
|
||||
{
|
||||
"instance_id": inst["InstanceId"],
|
||||
"state": inst["State"]["Name"],
|
||||
"slot_id": self._get_tag(tags, "AutoscalerSlot"),
|
||||
}
|
||||
)
|
||||
return result
|
||||
|
||||
def _call_with_backoff(self, fn: Any, *args: Any, max_retries: int = 3, **kwargs: Any) -> Any:
|
||||
"""Call *fn* with exponential backoff and full jitter on retryable errors."""
|
||||
delay = 0.5
|
||||
for attempt in range(max_retries + 1):
|
||||
try:
|
||||
return fn(*args, **kwargs)
|
||||
except ClientError as e:
|
||||
code = e.response["Error"]["Code"]
|
||||
if code in _RETRYABLE_CODES and attempt < max_retries:
|
||||
jitter = random.uniform(0, min(delay, 10.0))
|
||||
time.sleep(jitter)
|
||||
delay *= 2
|
||||
log.warning(
|
||||
"Retryable EC2 error (attempt %d/%d): %s",
|
||||
attempt + 1,
|
||||
max_retries,
|
||||
code,
|
||||
)
|
||||
continue
|
||||
category = _ERROR_CATEGORIES.get(code, "unknown")
|
||||
raise RuntimeAdapterError(str(e), category=category) from e
|
||||
|
||||
# Unreachable — loop always returns or raises on every path
|
||||
msg = "Retries exhausted"
|
||||
raise RuntimeAdapterError(msg, category="unknown")
|
||||
|
||||
def _discover_tailscale_ip(self, slot_id: str, instance_id: str) -> str | None:
|
||||
"""Resolve Tailscale IP for instance identity via local tailscaled LocalAPI."""
|
||||
status = self._read_tailscale_status()
|
||||
if status is None:
|
||||
return None
|
||||
|
||||
peers_obj = status.get("Peer")
|
||||
if not isinstance(peers_obj, dict):
|
||||
return None
|
||||
|
||||
online_candidates: list[tuple[str, str]] = []
|
||||
for peer in peers_obj.values():
|
||||
if not isinstance(peer, dict):
|
||||
continue
|
||||
if not self._peer_is_online(peer):
|
||||
continue
|
||||
hostname = self._peer_hostname(peer)
|
||||
if hostname is None:
|
||||
continue
|
||||
ip = self._peer_tailscale_ip(peer)
|
||||
if ip is None:
|
||||
continue
|
||||
online_candidates.append((hostname, ip))
|
||||
|
||||
identity = f"nix-builder-{slot_id}-{instance_id}".lower()
|
||||
identity_matches = [ip for host, ip in online_candidates if identity in host]
|
||||
if len(identity_matches) == 1:
|
||||
return identity_matches[0]
|
||||
if len(identity_matches) > 1:
|
||||
log.warning(
|
||||
"tailscale_identity_ambiguous",
|
||||
extra={"slot_id": slot_id, "instance_id": instance_id},
|
||||
)
|
||||
return None
|
||||
|
||||
slot_identity = f"nix-builder-{slot_id}".lower()
|
||||
slot_matches = [ip for host, ip in online_candidates if slot_identity in host]
|
||||
if len(slot_matches) == 1:
|
||||
return slot_matches[0]
|
||||
if len(slot_matches) > 1:
|
||||
log.warning("tailscale_slot_ambiguous", extra={"slot_id": slot_id})
|
||||
return None
|
||||
return None
|
||||
|
||||
def _read_tailscale_status(self) -> dict[str, Any] | None:
|
||||
"""Query local tailscaled LocalAPI status endpoint over Unix socket."""
|
||||
conn = _UnixSocketHTTPConnection(self._tailscale_socket_path, timeout=1.0)
|
||||
try:
|
||||
conn.request(
|
||||
"GET",
|
||||
"/localapi/v0/status",
|
||||
headers={"Host": "local-tailscaled.sock", "Accept": "application/json"},
|
||||
)
|
||||
response = conn.getresponse()
|
||||
if response.status != 200:
|
||||
return None
|
||||
payload = response.read()
|
||||
parsed = json.loads(payload.decode())
|
||||
if isinstance(parsed, dict):
|
||||
return parsed
|
||||
return None
|
||||
except (OSError, PermissionError, TimeoutError, json.JSONDecodeError, UnicodeDecodeError):
|
||||
return None
|
||||
except http.client.HTTPException:
|
||||
return None
|
||||
finally:
|
||||
conn.close()
|
||||
|
||||
@staticmethod
|
||||
def _peer_is_online(peer: dict[str, Any]) -> bool:
|
||||
return bool(peer.get("Online") or peer.get("Active"))
|
||||
|
||||
@staticmethod
|
||||
def _peer_hostname(peer: dict[str, Any]) -> str | None:
|
||||
host = peer.get("HostName") or peer.get("DNSName")
|
||||
if not isinstance(host, str) or not host:
|
||||
return None
|
||||
return host.strip(".").lower()
|
||||
|
||||
@staticmethod
|
||||
def _peer_tailscale_ip(peer: dict[str, Any]) -> str | None:
|
||||
ips = peer.get("TailscaleIPs")
|
||||
if not isinstance(ips, list):
|
||||
return None
|
||||
ipv4 = [ip for ip in ips if isinstance(ip, str) and "." in ip]
|
||||
if ipv4:
|
||||
return ipv4[0]
|
||||
for ip in ips:
|
||||
if isinstance(ip, str) and ip:
|
||||
return ip
|
||||
return None
|
||||
|
||||
@staticmethod
|
||||
def _get_tag(tags: list[dict[str, str]], key: str) -> str | None:
|
||||
"""Extract a tag value from an EC2 tag list."""
|
||||
for tag in tags:
|
||||
if tag.get("Key") == key:
|
||||
return tag.get("Value")
|
||||
return None
|
||||
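`_call_with_backoff` uses exponential backoff with full jitter: each retry sleeps a uniform random duration in [0, delay] (capped), and the nominal delay doubles per attempt, which spreads out retry storms under throttling. A self-contained sketch of the same pattern with an injectable sleep so it can be exercised without real waiting (names and the `is_retryable` hook are hypothetical):

```python
import random

def call_with_backoff(fn, *, is_retryable, max_retries=3,
                      base_delay=0.5, cap=10.0, sleep=None, rng=random):
    """Retry fn() on retryable errors, exponential backoff + full jitter."""
    sleep = sleep or (lambda seconds: None)  # no-op sleep by default (testable)
    delay = base_delay
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception as e:
            # Non-retryable errors, or exhausted budget: surface immediately.
            if not (is_retryable(e) and attempt < max_retries):
                raise
            sleep(rng.uniform(0, min(delay, cap)))  # full jitter
            delay *= 2

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("throttled")
    return "ok"

print(call_with_backoff(flaky, is_retryable=lambda e: isinstance(e, TimeoutError)))
# -> ok (succeeds on the third call)
```

Full jitter (versus equal or no jitter) trades a slightly longer expected wait for far better decorrelation of competing clients.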
122	agent/nix_builder_autoscaler/runtime/fake.py	Normal file

@@ -0,0 +1,122 @@
"""Fake runtime adapter for testing."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import uuid
|
||||
from dataclasses import dataclass
|
||||
|
||||
from .base import RuntimeAdapter
|
||||
from .base import RuntimeError as RuntimeAdapterError
|
||||
|
||||
|
||||
@dataclass
|
||||
class _FakeInstance:
|
||||
instance_id: str
|
||||
slot_id: str
|
||||
state: str = "pending"
|
||||
tailscale_ip: str | None = None
|
||||
launch_time: str = ""
|
||||
ticks_to_running: int = 0
|
||||
ticks_to_ip: int = 0
|
||||
interrupted: bool = False
|
||||
|
||||
|
||||
class FakeRuntime(RuntimeAdapter):
|
||||
"""In-memory runtime adapter for deterministic testing.
|
||||
|
||||
Args:
|
||||
launch_latency_ticks: Number of tick() calls before instance becomes running.
|
||||
ip_delay_ticks: Additional ticks after running before tailscale_ip appears.
|
||||
"""
|
||||
|
||||
def __init__(self, launch_latency_ticks: int = 2, ip_delay_ticks: int = 1) -> None:
|
||||
self._launch_latency = launch_latency_ticks
|
||||
self._ip_delay = ip_delay_ticks
|
||||
self._instances: dict[str, _FakeInstance] = {}
|
||||
self._launch_failures: set[str] = set()
|
||||
self._interruptions: set[str] = set()
|
||||
self._tick_count: int = 0
|
||||
self._next_ip_counter: int = 1
|
||||
|
||||
def launch_spot(self, slot_id: str, user_data: str) -> str:
|
||||
"""Launch a fake spot instance."""
|
||||
if slot_id in self._launch_failures:
|
||||
self._launch_failures.discard(slot_id)
|
||||
raise RuntimeAdapterError(
|
||||
f"Simulated launch failure for {slot_id}",
|
||||
category="capacity_unavailable",
|
||||
)
|
||||
|
||||
iid = f"i-fake-{uuid.uuid4().hex[:12]}"
|
||||
self._instances[iid] = _FakeInstance(
|
||||
instance_id=iid,
|
||||
slot_id=slot_id,
|
||||
state="pending",
|
||||
launch_time=f"2026-01-01T00:00:{self._tick_count:02d}Z",
|
||||
ticks_to_running=self._launch_latency,
|
||||
ticks_to_ip=self._launch_latency + self._ip_delay,
|
||||
)
|
||||
return iid
|
||||
|
||||
def describe_instance(self, instance_id: str) -> dict:
|
||||
"""Return normalized instance info."""
|
||||
inst = self._instances.get(instance_id)
|
||||
if inst is None:
|
||||
return {"state": "terminated", "tailscale_ip": None, "launch_time": None}
|
||||
|
||||
if instance_id in self._interruptions:
|
||||
self._interruptions.discard(instance_id)
|
||||
inst.state = "terminated"
|
||||
inst.interrupted = True
|
||||
|
||||
return {
|
||||
"state": inst.state,
|
||||
"tailscale_ip": inst.tailscale_ip,
|
||||
"launch_time": inst.launch_time,
|
||||
}
|
||||
|
||||
def terminate_instance(self, instance_id: str) -> None:
|
||||
"""Terminate a fake instance."""
|
||||
inst = self._instances.get(instance_id)
|
||||
if inst is not None:
|
||||
inst.state = "terminated"
|
||||
|
||||
def list_managed_instances(self) -> list[dict]:
|
||||
"""List all non-terminated fake instances."""
|
||||
result: list[dict] = []
|
||||
for inst in self._instances.values():
|
||||
if inst.state != "terminated":
|
||||
result.append(
|
||||
{
|
||||
"instance_id": inst.instance_id,
|
||||
"state": inst.state,
|
||||
"slot_id": inst.slot_id,
|
||||
}
|
||||
)
|
||||
return result
|
||||
|
||||
# -- Test helpers -------------------------------------------------------
|
||||
|
||||
def tick(self) -> None:
|
||||
"""Advance internal tick counter and progress instance states."""
|
||||
self._tick_count += 1
|
||||
for inst in self._instances.values():
|
||||
if inst.state == "terminated":
|
||||
continue
|
||||
if inst.state == "pending" and self._tick_count >= inst.ticks_to_running:
|
||||
inst.state = "running"
|
||||
if (
|
||||
inst.state == "running"
|
||||
and inst.tailscale_ip is None
|
||||
and self._tick_count >= inst.ticks_to_ip
|
||||
):
|
||||
inst.tailscale_ip = f"100.64.0.{self._next_ip_counter}"
|
||||
self._next_ip_counter += 1
|
||||
|
||||
def inject_launch_failure(self, slot_id: str) -> None:
|
||||
"""Make the next launch_spot call for this slot_id raise an error."""
|
||||
self._launch_failures.add(slot_id)
|
||||
|
||||
def inject_interruption(self, instance_id: str) -> None:
|
||||
"""Make the next describe_instance call for this instance return terminated."""
|
||||
self._interruptions.add(instance_id)
|
||||
266	agent/nix_builder_autoscaler/scheduler.py	Normal file

@@ -0,0 +1,266 @@
"""Scheduler — stateless scheduling tick for the autoscaler.
|
||||
|
||||
Each tick: expire reservations, handle interruptions, assign pending
|
||||
reservations to ready slots, launch new capacity, maintain warm pool
|
||||
and min-slots, check idle scale-down, and emit metrics.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
import time
|
||||
from datetime import datetime
|
||||
from typing import TYPE_CHECKING
|
||||
|
||||
from .bootstrap.userdata import render_userdata
|
||||
from .models import SlotState
|
||||
from .runtime.base import RuntimeError as RuntimeAdapterError
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from .config import AppConfig
|
||||
from .metrics import MetricsRegistry
|
||||
from .providers.clock import Clock
|
||||
from .runtime.base import RuntimeAdapter
|
||||
from .state_db import StateDB
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def scheduling_tick(
|
||||
db: StateDB,
|
||||
runtime: RuntimeAdapter,
|
||||
config: AppConfig,
|
||||
clock: Clock,
|
||||
metrics: MetricsRegistry,
|
||||
) -> None:
|
||||
"""Execute one scheduling tick.
|
||||
|
||||
All dependencies are passed as arguments — no global state.
|
||||
"""
|
||||
t0 = time.monotonic()
|
||||
|
||||
# 1. Expire old reservations
|
||||
expired = db.expire_reservations(clock.now())
|
||||
if expired:
|
||||
log.info("expired_reservations", extra={"count": len(expired), "ids": expired})
|
||||
|
||||
# 2. Handle interruption-pending slots
|
||||
_handle_interruptions(db)
|
||||
|
||||
# 3. Assign pending reservations to ready slots
|
||||
_assign_reservations(db, config)
|
||||
|
||||
# 4. Launch new capacity for unmet demand
|
||||
_launch_for_unmet_demand(db, runtime, config, metrics)
|
||||
|
||||
# 5. Ensure minimum slots and warm pool
|
||||
_ensure_min_and_warm(db, runtime, config, metrics)
|
||||
|
||||
# 6. Check scale-down for idle slots
|
||||
_check_idle_scale_down(db, config, clock)
|
||||
|
||||
# 7. Emit metrics
|
||||
tick_duration = time.monotonic() - t0
|
||||
_update_metrics(db, metrics, tick_duration)
|
||||
|
||||
|
||||
def _handle_interruptions(db: StateDB) -> None:
|
||||
"""Move ready slots with interruption_pending to draining."""
|
||||
ready_slots = db.list_slots(SlotState.READY)
|
||||
for slot in ready_slots:
|
||||
if slot["interruption_pending"]:
|
||||
db.update_slot_state(slot["slot_id"], SlotState.DRAINING, interruption_pending=0)
|
||||
log.info(
|
||||
"interruption_drain",
|
||||
extra={"slot_id": slot["slot_id"]},
|
||||
)
|
||||
|
||||
|
||||
def _assign_reservations(db: StateDB, config: AppConfig) -> None:
|
||||
"""Assign pending reservations to ready slots with capacity."""
|
||||
from .models import ReservationPhase
|
||||
|
||||
pending = db.list_reservations(ReservationPhase.PENDING)
|
||||
if not pending:
|
||||
return
|
||||
|
||||
ready_slots = db.list_slots(SlotState.READY)
|
||||
if not ready_slots:
|
||||
return
|
||||
|
||||
max_leases = config.capacity.max_leases_per_slot
|
||||
# Track in-memory capacity to prevent double-assignment within the same tick
|
||||
capacity_map: dict[str, int] = {s["slot_id"]: s["lease_count"] for s in ready_slots}
|
||||
|
||||
for resv in pending:
|
||||
system = resv["system"]
|
||||
slot = _find_assignable_slot(ready_slots, system, max_leases, capacity_map)
|
||||
if slot is None:
|
||||
continue
|
||||
db.assign_reservation(resv["reservation_id"], slot["slot_id"], slot["instance_id"])
|
||||
capacity_map[slot["slot_id"]] += 1
|
||||
log.info(
|
||||
"reservation_assigned",
|
||||
extra={
|
||||
"reservation_id": resv["reservation_id"],
|
||||
"slot_id": slot["slot_id"],
|
||||
},
|
||||
)
|
||||
|
||||
|
||||
def _find_assignable_slot(
|
||||
ready_slots: list[dict],
|
||||
system: str,
|
||||
max_leases: int,
|
||||
capacity_map: dict[str, int],
|
||||
) -> dict | None:
|
||||
"""Return first ready slot for system with remaining capacity, or None."""
|
||||
for slot in ready_slots:
|
||||
if slot["system"] != system:
|
||||
continue
|
||||
sid = slot["slot_id"]
|
||||
current: int = capacity_map[sid] if sid in capacity_map else slot["lease_count"]
|
||||
if current < max_leases:
|
||||
return slot
|
||||
return None
|
||||
|
||||
|
||||
def _count_active_slots(db: StateDB) -> int:
|
||||
"""Count slots NOT in empty or error states."""
|
||||
all_slots = db.list_slots()
|
||||
return sum(
|
||||
1 for s in all_slots if s["state"] not in (SlotState.EMPTY.value, SlotState.ERROR.value)
|
||||
)
|
||||
|
||||
|
||||
def _launch_for_unmet_demand(
|
||||
db: StateDB,
|
||||
runtime: RuntimeAdapter,
|
||||
config: AppConfig,
|
||||
metrics: MetricsRegistry,
|
||||
) -> None:
|
||||
"""Launch new capacity for pending reservations that couldn't be assigned."""
|
||||
from .models import ReservationPhase
|
||||
|
||||
pending = db.list_reservations(ReservationPhase.PENDING)
|
||||
if not pending:
|
||||
return
|
||||
|
||||
active = _count_active_slots(db)
|
||||
if active >= config.capacity.max_slots:
|
||||
return
|
||||
|
||||
empty_slots = db.list_slots(SlotState.EMPTY)
|
||||
if not empty_slots:
|
||||
return
|
||||
|
||||
for launched, slot in enumerate(empty_slots):
|
||||
if launched >= len(pending):
|
||||
break
|
||||
if active + launched >= config.capacity.max_slots:
|
||||
break
|
||||
_launch_slot(db, runtime, config, metrics, slot)
|
||||
|
||||
|
||||
def _ensure_min_and_warm(
|
||||
db: StateDB,
|
||||
runtime: RuntimeAdapter,
|
||||
config: AppConfig,
|
||||
metrics: MetricsRegistry,
|
||||
) -> None:
|
||||
"""Ensure minimum slots and warm pool targets are met."""
|
||||
active = _count_active_slots(db)
|
||||
|
||||
# Ensure min_slots
|
||||
if active < config.capacity.min_slots:
|
||||
needed = config.capacity.min_slots - active
|
||||
empty_slots = db.list_slots(SlotState.EMPTY)
|
||||
launched = 0
|
||||
for slot in empty_slots:
|
||||
if launched >= needed:
|
||||
break
|
||||
if active + launched >= config.capacity.max_slots:
|
||||
break
|
||||
_launch_slot(db, runtime, config, metrics, slot)
|
||||
launched += 1
|
||||
active += launched
|
||||
|
||||
# Ensure warm pool
|
||||
if config.capacity.target_warm_slots > 0:
|
||||
ready_idle = sum(1 for s in db.list_slots(SlotState.READY) if s["lease_count"] == 0)
|
||||
pending_warm = (
|
||||
len(db.list_slots(SlotState.LAUNCHING))
|
||||
+ len(db.list_slots(SlotState.BOOTING))
|
||||
+ len(db.list_slots(SlotState.BINDING))
|
||||
)
|
||||
warm_total = ready_idle + pending_warm
|
||||
if warm_total < config.capacity.target_warm_slots:
|
||||
needed = config.capacity.target_warm_slots - warm_total
|
||||
empty_slots = db.list_slots(SlotState.EMPTY)
|
||||
launched = 0
|
||||
for slot in empty_slots:
|
||||
if launched >= needed:
|
||||
break
|
||||
if active + launched >= config.capacity.max_slots:
|
||||
break
|
||||
_launch_slot(db, runtime, config, metrics, slot)
|
||||
launched += 1
|
||||
|
||||
|
||||
def _launch_slot(
|
||||
db: StateDB,
|
||||
runtime: RuntimeAdapter,
|
||||
config: AppConfig,
|
||||
metrics: MetricsRegistry,
|
||||
slot: dict,
|
||||
) -> None:
|
||||
"""Launch a single slot. Transition to LAUNCHING on success, ERROR on failure."""
|
||||
slot_id = slot["slot_id"]
|
||||
user_data = render_userdata(slot_id, config.aws.region)
|
||||
try:
|
||||
instance_id = runtime.launch_spot(slot_id, user_data)
|
||||
metrics.counter("autoscaler_ec2_launch_total", {"result": "success"}, 1.0)
|
||||
db.update_slot_state(slot_id, SlotState.LAUNCHING, instance_id=instance_id)
|
||||
log.info("slot_launched", extra={"slot_id": slot_id, "instance_id": instance_id})
|
||||
except RuntimeAdapterError as exc:
|
||||
metrics.counter("autoscaler_ec2_launch_total", {"result": exc.category}, 1.0)
|
||||
db.update_slot_state(slot_id, SlotState.ERROR)
|
||||
log.warning(
|
||||
"slot_launch_failed",
|
||||
extra={"slot_id": slot_id, "error": str(exc), "category": exc.category},
|
||||
)
|
||||
|
||||
|
||||
def _check_idle_scale_down(db: StateDB, config: AppConfig, clock: Clock) -> None:
|
||||
"""Move idle ready slots to draining when idle threshold exceeded."""
|
||||
ready_slots = db.list_slots(SlotState.READY)
|
||||
now = clock.now()
|
||||
active = _count_active_slots(db)
|
||||
|
||||
for slot in ready_slots:
|
||||
if slot["lease_count"] != 0:
|
||||
continue
|
||||
last_change = datetime.fromisoformat(slot["last_state_change"])
|
||||
idle_seconds = (now - last_change).total_seconds()
|
||||
if idle_seconds > config.capacity.idle_scale_down_seconds:
|
||||
if active <= config.capacity.min_slots:
|
||||
continue
|
||||
db.update_slot_state(slot["slot_id"], SlotState.DRAINING)
|
||||
active -= 1
|
||||
log.info(
|
||||
"idle_scale_down",
|
||||
extra={"slot_id": slot["slot_id"], "idle_seconds": idle_seconds},
|
||||
)
|
||||
|
||||
|
||||
def _update_metrics(db: StateDB, metrics: MetricsRegistry, tick_duration: float) -> None:
|
||||
"""Refresh all gauge/counter/histogram values."""
|
||||
summary = db.get_state_summary()
|
||||
|
||||
for state, count in summary["slots"].items():
|
||||
metrics.gauge("autoscaler_slots_total", {"state": state}, float(count))
|
||||
|
||||
for phase, count in summary["reservations"].items():
|
||||
metrics.gauge("autoscaler_reservations_total", {"phase": phase}, float(count))
|
||||
|
||||
metrics.histogram_observe("autoscaler_scheduler_tick_duration_seconds", {}, tick_duration)
|
||||
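The assignment step above is a greedy first-fit matcher: `_assign_reservations` seeds an in-memory map with each ready slot's persisted lease count, and bumps the map as reservations are assigned, so two pending reservations cannot be double-assigned past `max_leases_per_slot` within one tick. A standalone sketch of that pattern (function name hypothetical, slot/reservation dicts shaped like the ones in the diff):

```python
def assign(pending: list[dict], ready_slots: list[dict], max_leases: int) -> list[tuple[str, str]]:
    """Greedy first-fit: match each reservation's system to a ready slot
    with remaining capacity, counting assignments made this tick."""
    capacity = {s["slot_id"]: s["lease_count"] for s in ready_slots}
    assigned: list[tuple[str, str]] = []
    for resv in pending:
        for slot in ready_slots:
            sid = slot["slot_id"]
            if slot["system"] == resv["system"] and capacity[sid] < max_leases:
                capacity[sid] += 1  # in-memory bump prevents double-assignment
                assigned.append((resv["reservation_id"], sid))
                break
    return assigned

slots = [{"slot_id": "s1", "system": "x86_64-linux", "lease_count": 1}]
pending = [{"reservation_id": f"r{i}", "system": "x86_64-linux"} for i in range(3)]
print(assign(pending, slots, max_leases=2))  # [('r0', 's1')] — s1 is full after r0
```

Without the in-memory bump, every reservation in the tick would see the same stale `lease_count` and all three would land on `s1`.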
468	agent/nix_builder_autoscaler/state_db.py	Normal file

@@ -0,0 +1,468 @@
"""SQLite state persistence layer.

All write operations use BEGIN IMMEDIATE transactions for crash safety.
"""

from __future__ import annotations

import json
import sqlite3
import threading
import uuid
from datetime import UTC, datetime, timedelta
from pathlib import Path
from typing import TYPE_CHECKING

from .models import ReservationPhase, SlotState

if TYPE_CHECKING:
    from .providers.clock import Clock

_SCHEMA = """
CREATE TABLE IF NOT EXISTS slots (
    slot_id TEXT PRIMARY KEY,
    system TEXT NOT NULL,
    state TEXT NOT NULL,
    instance_id TEXT,
    instance_ip TEXT,
    instance_launch_time TEXT,
    bound_backend TEXT NOT NULL,
    lease_count INTEGER NOT NULL DEFAULT 0,
    last_state_change TEXT NOT NULL,
    cooldown_until TEXT,
    interruption_pending INTEGER NOT NULL DEFAULT 0
);

CREATE TABLE IF NOT EXISTS reservations (
    reservation_id TEXT PRIMARY KEY,
    system TEXT NOT NULL,
    phase TEXT NOT NULL,
    slot_id TEXT,
    instance_id TEXT,
    created_at TEXT NOT NULL,
    updated_at TEXT NOT NULL,
    expires_at TEXT NOT NULL,
    released_at TEXT,
    reason TEXT,
    build_id INTEGER
);

CREATE TABLE IF NOT EXISTS events (
    event_id INTEGER PRIMARY KEY AUTOINCREMENT,
    ts TEXT NOT NULL,
    kind TEXT NOT NULL,
    payload_json TEXT NOT NULL
);
"""

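The module docstring promises that every write runs inside a `BEGIN IMMEDIATE` transaction. A minimal stdlib sketch of that pattern (using `isolation_level=None` so transaction control is fully explicit, which the real module manages differently):

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode, so BEGIN/COMMIT
# below are the only transaction boundaries.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")

# BEGIN IMMEDIATE takes the write lock up front: either both inserts become
# visible atomically, or (on crash/rollback) neither does.
conn.execute("BEGIN IMMEDIATE")
try:
    conn.execute("INSERT INTO kv VALUES ('a', '1')")
    conn.execute("INSERT INTO kv VALUES ('b', '2')")
    conn.execute("COMMIT")
except Exception:
    conn.execute("ROLLBACK")
    raise

print(conn.execute("SELECT COUNT(*) FROM kv").fetchone()[0])  # → 2
```

Acquiring the write lock at `BEGIN` rather than at the first write also avoids `SQLITE_BUSY` surfacing mid-transaction when another writer is active.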
def _now_iso(clock: Clock | None = None) -> str:
    if clock is not None:
        return clock.now().isoformat()
    return datetime.now(UTC).isoformat()


def _row_to_dict(cursor: sqlite3.Cursor, row: tuple) -> dict:  # type: ignore[type-arg]
    """Convert a sqlite3 row to a dict using column names."""
    cols = [d[0] for d in cursor.description]
    return dict(zip(cols, row, strict=False))

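`_row_to_dict` relies on `cursor.description`, whose first element per column is the column name. A self-contained illustration of the same conversion:

```python
import sqlite3


def row_to_dict(cursor: sqlite3.Cursor, row: tuple) -> dict:
    # cursor.description is a sequence of 7-tuples; index 0 is the column name.
    cols = [d[0] for d in cursor.description]
    return dict(zip(cols, row))


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE slots (slot_id TEXT, state TEXT)")
conn.execute("INSERT INTO slots VALUES ('slot001', 'ready')")
cur = conn.execute("SELECT * FROM slots")
d = row_to_dict(cur, cur.fetchone())
print(d)  # → {'slot_id': 'slot001', 'state': 'ready'}
```

The stdlib alternative is `conn.row_factory = sqlite3.Row`, which yields rows indexable by name; the explicit helper keeps plain `dict` values that serialize and `**`-splat cleanly.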
class StateDB:
    """SQLite-backed state store for slots, reservations, and events."""

    def __init__(self, db_path: str | Path = ":memory:", clock: Clock | None = None) -> None:
        self._conn = sqlite3.connect(str(db_path), check_same_thread=False)
        self._conn.execute("PRAGMA journal_mode=WAL")
        self._conn.execute("PRAGMA busy_timeout=5000")
        self._clock = clock
        self._lock = threading.RLock()

    def init_schema(self) -> None:
        """Create tables if they don't exist."""
        with self._lock:
            self._conn.executescript(_SCHEMA)

    def init_slots(self, slot_prefix: str, slot_count: int, system: str, backend: str) -> None:
        """Ensure all expected slots exist, creating missing ones as empty."""
        with self._lock:
            now = _now_iso(self._clock)
            for i in range(1, slot_count + 1):
                slot_id = f"{slot_prefix}{i:03d}"
                bound = f"{backend}/{slot_id}"
                self._conn.execute(
                    """INSERT OR IGNORE INTO slots
                       (slot_id, system, state, bound_backend, lease_count, last_state_change)
                       VALUES (?, ?, ?, ?, 0, ?)""",
                    (slot_id, system, SlotState.EMPTY.value, bound, now),
                )
            self._conn.commit()
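`init_slots` is idempotent because `INSERT OR IGNORE` silently skips rows whose primary key already exists, so restarting the daemon never clobbers live slot state. A standalone demonstration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE slots (slot_id TEXT PRIMARY KEY, state TEXT)")

# Run the "initialisation" twice: the second pass is a no-op for existing keys,
# so pre-existing rows (and any state they carry) survive a restart.
for _ in range(2):
    for i in range(1, 4):
        conn.execute(
            "INSERT OR IGNORE INTO slots VALUES (?, ?)",
            (f"slot{i:03d}", "empty"),
        )

print(conn.execute("SELECT COUNT(*) FROM slots").fetchone()[0])  # → 3
```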
    # -- Slot operations ----------------------------------------------------

    def get_slot(self, slot_id: str) -> dict | None:
        """Return a slot row as dict, or None."""
        with self._lock:
            cur = self._conn.execute("SELECT * FROM slots WHERE slot_id = ?", (slot_id,))
            row = cur.fetchone()
            if row is None:
                return None
            return _row_to_dict(cur, row)

    def list_slots(self, state: SlotState | None = None) -> list[dict]:
        """List slots, optionally filtered by state."""
        with self._lock:
            if state is not None:
                cur = self._conn.execute(
                    "SELECT * FROM slots WHERE state = ? ORDER BY slot_id", (state.value,)
                )
            else:
                cur = self._conn.execute("SELECT * FROM slots ORDER BY slot_id")
            return [_row_to_dict(cur, row) for row in cur.fetchall()]

    def update_slot_state(self, slot_id: str, new_state: SlotState, **fields: object) -> None:
        """Atomically transition a slot to a new state and record an event.

        Additional fields (instance_id, instance_ip, etc.) can be passed as kwargs.
        """
        with self._lock:
            now = _now_iso(self._clock)
            set_parts = ["state = ?", "last_state_change = ?"]
            params: list[object] = [new_state.value, now]

            allowed = {
                "instance_id",
                "instance_ip",
                "instance_launch_time",
                "lease_count",
                "cooldown_until",
                "interruption_pending",
            }
            for k, v in fields.items():
                if k not in allowed:
                    msg = f"Unknown slot field: {k}"
                    raise ValueError(msg)
                set_parts.append(f"{k} = ?")
                params.append(v)

            params.append(slot_id)
            sql = f"UPDATE slots SET {', '.join(set_parts)} WHERE slot_id = ?"

            self._conn.execute("BEGIN IMMEDIATE")
            try:
                self._conn.execute(sql, params)
                self._record_event_inner(
                    "slot_state_change",
                    {"slot_id": slot_id, "new_state": new_state.value, **fields},
                )
                self._conn.execute("COMMIT")
            except Exception:
                self._conn.execute("ROLLBACK")
                raise

    def update_slot_fields(self, slot_id: str, **fields: object) -> None:
        """Update specific slot columns without changing state or last_state_change.

        Uses BEGIN IMMEDIATE. Allowed fields: instance_id, instance_ip,
        instance_launch_time, lease_count, cooldown_until, interruption_pending.
        """
        with self._lock:
            allowed = {
                "instance_id",
                "instance_ip",
                "instance_launch_time",
                "lease_count",
                "cooldown_until",
                "interruption_pending",
            }
            if not fields:
                return

            set_parts: list[str] = []
            params: list[object] = []
            for k, v in fields.items():
                if k not in allowed:
                    msg = f"Unknown slot field: {k}"
                    raise ValueError(msg)
                set_parts.append(f"{k} = ?")
                params.append(v)

            params.append(slot_id)
            sql = f"UPDATE slots SET {', '.join(set_parts)} WHERE slot_id = ?"

            self._conn.execute("BEGIN IMMEDIATE")
            try:
                self._conn.execute(sql, params)
                self._record_event_inner(
                    "slot_fields_updated",
                    {"slot_id": slot_id, **fields},
                )
                self._conn.execute("COMMIT")
            except Exception:
                self._conn.execute("ROLLBACK")
                raise
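Both update methods build the `SET` clause dynamically but keep it injection-safe: column names come only from a hard-coded allow-list, and every value travels as a `?` placeholder. The core of that pattern, isolated (`ALLOWED` here is a trimmed stand-in for the real field set):

```python
ALLOWED = {"instance_id", "instance_ip", "lease_count"}


def build_update(slot_id: str, **fields: object) -> tuple[str, list[object]]:
    """Build a parameterised UPDATE from allow-listed kwargs."""
    for k in fields:
        if k not in ALLOWED:
            raise ValueError(f"Unknown slot field: {k}")
    # Only vetted identifiers reach the SQL string; values stay as placeholders.
    set_parts = [f"{k} = ?" for k in fields]
    params: list[object] = [*fields.values(), slot_id]
    return f"UPDATE slots SET {', '.join(set_parts)} WHERE slot_id = ?", params


sql, params = build_update("slot001", lease_count=2)
print(sql)     # → UPDATE slots SET lease_count = ? WHERE slot_id = ?
print(params)  # → [2, 'slot001']
```

Rejecting unknown keys up front also turns typos in caller kwargs into immediate `ValueError`s instead of silent no-ops.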
    # -- Reservation operations ---------------------------------------------

    def create_reservation(
        self,
        system: str,
        reason: str,
        build_id: int | None,
        ttl_seconds: int,
    ) -> dict:
        """Create a new pending reservation. Returns the reservation row as dict."""
        with self._lock:
            now = _now_iso(self._clock)
            if self._clock is not None:
                expires = (self._clock.now() + timedelta(seconds=ttl_seconds)).isoformat()
            else:
                expires = (datetime.now(UTC) + timedelta(seconds=ttl_seconds)).isoformat()
            rid = f"resv_{uuid.uuid4().hex}"

            self._conn.execute("BEGIN IMMEDIATE")
            try:
                self._conn.execute(
                    """INSERT INTO reservations
                       (reservation_id, system, phase, created_at, updated_at,
                        expires_at, reason, build_id)
                       VALUES (?, ?, ?, ?, ?, ?, ?, ?)""",
                    (
                        rid,
                        system,
                        ReservationPhase.PENDING.value,
                        now,
                        now,
                        expires,
                        reason,
                        build_id,
                    ),
                )
                self._record_event_inner(
                    "reservation_created",
                    {"reservation_id": rid, "system": system, "reason": reason},
                )
                self._conn.execute("COMMIT")
            except Exception:
                self._conn.execute("ROLLBACK")
                raise

        return self.get_reservation(rid)  # type: ignore[return-value]

    def get_reservation(self, reservation_id: str) -> dict | None:
        """Return a reservation row as dict, or None."""
        with self._lock:
            cur = self._conn.execute(
                "SELECT * FROM reservations WHERE reservation_id = ?", (reservation_id,)
            )
            row = cur.fetchone()
            if row is None:
                return None
            return _row_to_dict(cur, row)

    def list_reservations(self, phase: ReservationPhase | None = None) -> list[dict]:
        """List reservations, optionally filtered by phase."""
        with self._lock:
            if phase is not None:
                cur = self._conn.execute(
                    "SELECT * FROM reservations WHERE phase = ? ORDER BY created_at",
                    (phase.value,),
                )
            else:
                cur = self._conn.execute("SELECT * FROM reservations ORDER BY created_at")
            return [_row_to_dict(cur, row) for row in cur.fetchall()]

    def assign_reservation(self, reservation_id: str, slot_id: str, instance_id: str) -> None:
        """Assign a pending reservation to a ready slot.

        Atomically: update reservation phase to ready, set slot_id/instance_id,
        and increment slot lease_count.
        """
        with self._lock:
            now = _now_iso(self._clock)

            self._conn.execute("BEGIN IMMEDIATE")
            try:
                self._conn.execute(
                    """UPDATE reservations
                       SET phase = ?, slot_id = ?, instance_id = ?, updated_at = ?
                       WHERE reservation_id = ? AND phase = ?""",
                    (
                        ReservationPhase.READY.value,
                        slot_id,
                        instance_id,
                        now,
                        reservation_id,
                        ReservationPhase.PENDING.value,
                    ),
                )
                self._conn.execute(
                    "UPDATE slots SET lease_count = lease_count + 1 WHERE slot_id = ?",
                    (slot_id,),
                )
                self._record_event_inner(
                    "reservation_assigned",
                    {
                        "reservation_id": reservation_id,
                        "slot_id": slot_id,
                        "instance_id": instance_id,
                    },
                )
                self._conn.execute("COMMIT")
            except Exception:
                self._conn.execute("ROLLBACK")
                raise

    def release_reservation(self, reservation_id: str) -> dict | None:
        """Release a reservation, decrementing the slot lease count."""
        with self._lock:
            now = _now_iso(self._clock)

            self._conn.execute("BEGIN IMMEDIATE")
            try:
                cur = self._conn.execute(
                    "SELECT * FROM reservations WHERE reservation_id = ?",
                    (reservation_id,),
                )
                row = cur.fetchone()
                if row is None:
                    self._conn.execute("ROLLBACK")
                    return None

                resv = _row_to_dict(cur, row)
                old_phase = resv["phase"]

                if old_phase in (ReservationPhase.RELEASED.value, ReservationPhase.EXPIRED.value):
                    self._conn.execute("ROLLBACK")
                    return resv

                self._conn.execute(
                    """UPDATE reservations
                       SET phase = ?, released_at = ?, updated_at = ?
                       WHERE reservation_id = ?""",
                    (ReservationPhase.RELEASED.value, now, now, reservation_id),
                )

                if resv["slot_id"] and old_phase == ReservationPhase.READY.value:
                    self._conn.execute(
                        """UPDATE slots SET lease_count = MAX(lease_count - 1, 0)
                           WHERE slot_id = ?""",
                        (resv["slot_id"],),
                    )

                self._record_event_inner("reservation_released", {"reservation_id": reservation_id})
                self._conn.execute("COMMIT")
            except Exception:
                self._conn.execute("ROLLBACK")
                raise

        return self.get_reservation(reservation_id)

    def expire_reservations(self, now: datetime) -> list[str]:
        """Expire all reservations past their expires_at. Returns expired IDs."""
        with self._lock:
            now_iso = now.isoformat()
            expired_ids: list[str] = []

            self._conn.execute("BEGIN IMMEDIATE")
            try:
                cur = self._conn.execute(
                    """SELECT reservation_id, slot_id, phase FROM reservations
                       WHERE phase IN (?, ?) AND expires_at <= ?""",
                    (ReservationPhase.PENDING.value, ReservationPhase.READY.value, now_iso),
                )
                rows = cur.fetchall()

                for row in rows:
                    rid, slot_id, phase = row
                    expired_ids.append(rid)
                    self._conn.execute(
                        """UPDATE reservations
                           SET phase = ?, updated_at = ?
                           WHERE reservation_id = ?""",
                        (ReservationPhase.EXPIRED.value, now_iso, rid),
                    )
                    if slot_id and phase == ReservationPhase.READY.value:
                        self._conn.execute(
                            """UPDATE slots SET lease_count = MAX(lease_count - 1, 0)
                               WHERE slot_id = ?""",
                            (slot_id,),
                        )
                    self._record_event_inner("reservation_expired", {"reservation_id": rid})

                self._conn.execute("COMMIT")
            except Exception:
                self._conn.execute("ROLLBACK")
                raise

        return expired_ids
    # -- Events -------------------------------------------------------------

    def record_event(self, kind: str, payload: dict) -> None:  # type: ignore[type-arg]
        """Record an audit event."""
        with self._lock:
            self._conn.execute("BEGIN IMMEDIATE")
            try:
                self._record_event_inner(kind, payload)
                self._conn.execute("COMMIT")
            except Exception:
                self._conn.execute("ROLLBACK")
                raise

    def _record_event_inner(self, kind: str, payload: dict) -> None:  # type: ignore[type-arg]
        """Insert an event row (must be called inside an active transaction)."""
        with self._lock:
            now = _now_iso(self._clock)
            self._conn.execute(
                "INSERT INTO events (ts, kind, payload_json) VALUES (?, ?, ?)",
                (now, kind, json.dumps(payload, default=str)),
            )
    # -- Summaries ----------------------------------------------------------

    def get_state_summary(self) -> dict:
        """Return aggregate slot and reservation counts."""
        with self._lock:
            slot_counts: dict[str, int] = {}
            cur = self._conn.execute("SELECT state, COUNT(*) FROM slots GROUP BY state")
            for state_val, count in cur.fetchall():
                slot_counts[state_val] = count

            total_slots = sum(slot_counts.values())

            resv_counts: dict[str, int] = {}
            cur = self._conn.execute(
                "SELECT phase, COUNT(*) FROM reservations WHERE phase IN (?, ?, ?) GROUP BY phase",
                (
                    ReservationPhase.PENDING.value,
                    ReservationPhase.READY.value,
                    ReservationPhase.FAILED.value,
                ),
            )
            for phase_val, count in cur.fetchall():
                resv_counts[phase_val] = count

            return {
                "slots": {
                    "total": total_slots,
                    "ready": slot_counts.get("ready", 0),
                    "launching": slot_counts.get("launching", 0),
                    "booting": slot_counts.get("booting", 0),
                    "binding": slot_counts.get("binding", 0),
                    "draining": slot_counts.get("draining", 0),
                    "terminating": slot_counts.get("terminating", 0),
                    "empty": slot_counts.get("empty", 0),
                    "error": slot_counts.get("error", 0),
                },
                "reservations": {
                    "pending": resv_counts.get("pending", 0),
                    "ready": resv_counts.get("ready", 0),
                    "failed": resv_counts.get("failed", 0),
                },
            }

    def close(self) -> None:
        """Close the database connection."""
        with self._lock:
            self._conn.close()
0	agent/nix_builder_autoscaler/tests/__init__.py	Normal file
@@ -0,0 +1,407 @@
"""End-to-end integration tests with FakeRuntime and a fake HAProxy socket."""

from __future__ import annotations

import socket
import threading
import time
from pathlib import Path

from fastapi.testclient import TestClient

from nix_builder_autoscaler.api import create_app
from nix_builder_autoscaler.config import (
    AppConfig,
    AwsConfig,
    CapacityConfig,
    HaproxyConfig,
    SchedulerConfig,
)
from nix_builder_autoscaler.metrics import MetricsRegistry
from nix_builder_autoscaler.models import SlotState
from nix_builder_autoscaler.providers.clock import FakeClock
from nix_builder_autoscaler.providers.haproxy import HAProxyRuntime
from nix_builder_autoscaler.reconciler import Reconciler
from nix_builder_autoscaler.runtime.fake import FakeRuntime
from nix_builder_autoscaler.scheduler import scheduling_tick
from nix_builder_autoscaler.state_db import StateDB

class FakeHAProxySocketServer:
    """Tiny fake HAProxy runtime socket server for integration tests."""

    def __init__(self, socket_path: Path, backend: str, slot_ids: list[str]) -> None:
        self._socket_path = socket_path
        self._backend = backend
        self._slot_ids = slot_ids
        self._stop_event = threading.Event()
        self._thread: threading.Thread | None = None
        self._lock = threading.Lock()
        self._state: dict[str, dict[str, object]] = {
            slot_id: {
                "enabled": False,
                "addr": "0.0.0.0",
                "port": 22,
                "status": "MAINT",
                "scur": 0,
                "qcur": 0,
            }
            for slot_id in slot_ids
        }

    def start(self) -> None:
        self._thread = threading.Thread(target=self._serve, name="fake-haproxy", daemon=True)
        self._thread.start()
        deadline = time.time() + 2.0
        while time.time() < deadline:
            if self._socket_path.exists():
                return
            time.sleep(0.01)
        msg = f"fake haproxy socket not created: {self._socket_path}"
        raise RuntimeError(msg)

    def stop(self) -> None:
        self._stop_event.set()
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
                sock.connect(str(self._socket_path))
                sock.sendall(b"\n")
        except OSError:
            pass
        if self._thread is not None:
            self._thread.join(timeout=2.0)
        if self._socket_path.exists():
            self._socket_path.unlink()

    def _serve(self) -> None:
        if self._socket_path.exists():
            self._socket_path.unlink()

        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as server:
            server.bind(str(self._socket_path))
            server.listen(16)
            server.settimeout(0.2)
            while not self._stop_event.is_set():
                try:
                    conn, _ = server.accept()
                except TimeoutError:
                    continue
                except OSError:
                    if self._stop_event.is_set():
                        break
                    continue
                with conn:
                    payload = b""
                    while True:
                        chunk = conn.recv(4096)
                        if not chunk:
                            break
                        payload += chunk
                    command = payload.decode().strip()
                    response = self._handle_command(command)
                    try:
                        conn.sendall(response.encode())
                    except BrokenPipeError:
                        continue

    def _handle_command(self, command: str) -> str:
        if command == "show stat":
            return self._render_show_stat()

        parts = command.split()
        if not parts:
            return "\n"

        if parts[0:2] == ["set", "server"] and len(parts) >= 7:
            slot_id = self._parse_slot(parts[2])
            if slot_id is None:
                return "No such server.\n"
            with self._lock:
                slot_state = self._state[slot_id]
                slot_state["addr"] = parts[4]
                slot_state["port"] = int(parts[6])
                slot_state["status"] = "UP" if slot_state["enabled"] else "DOWN"
            return "\n"

        if parts[0:2] == ["enable", "server"] and len(parts) >= 3:
            slot_id = self._parse_slot(parts[2])
            if slot_id is None:
                return "No such server.\n"
            with self._lock:
                slot_state = self._state[slot_id]
                slot_state["enabled"] = True
                slot_state["status"] = "UP"
            return "\n"

        if parts[0:2] == ["disable", "server"] and len(parts) >= 3:
            slot_id = self._parse_slot(parts[2])
            if slot_id is None:
                return "No such server.\n"
            with self._lock:
                slot_state = self._state[slot_id]
                slot_state["enabled"] = False
                slot_state["status"] = "MAINT"
            return "\n"

        return "Unknown command.\n"

    def _parse_slot(self, backend_slot: str) -> str | None:
        backend, _, slot_id = backend_slot.partition("/")
        if backend != self._backend or slot_id not in self._state:
            return None
        return slot_id

    def _render_show_stat(self) -> str:
        header = "# pxname,svname,qcur,qmax,scur,smax,slim,stot,status\n"
        rows = [f"{self._backend},BACKEND,0,0,0,0,0,0,UP\n"]
        with self._lock:
            for slot_id in self._slot_ids:
                slot_state = self._state[slot_id]
                rows.append(
                    f"{self._backend},{slot_id},{slot_state['qcur']},0,"
                    f"{slot_state['scur']},0,50,0,{slot_state['status']}\n"
                )
        return header + "".join(rows)
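The fake server's accept loop is a standard pattern: a daemon thread serving a Unix socket, with `settimeout` on the listener so the stop event is polled between accepts instead of blocking forever. A stripped-down echo version of the same loop (names here are illustrative, not from the test suite):

```python
import socket
import tempfile
import threading
import time
from pathlib import Path

stop = threading.Event()
path = Path(tempfile.mkdtemp()) / "test.sock"


def serve() -> None:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as server:
        server.bind(str(path))
        server.listen(1)
        server.settimeout(0.1)  # wake up regularly to check the stop event
        while not stop.is_set():
            try:
                conn, _ = server.accept()
            except TimeoutError:
                continue
            with conn:
                data = conn.recv(4096)
                conn.sendall(b"echo: " + data)


t = threading.Thread(target=serve, daemon=True)
t.start()
while not path.exists():  # same "wait for socket file" handshake as start()
    time.sleep(0.01)

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as c:
    c.connect(str(path))
    c.sendall(b"show stat")
    reply = c.recv(4096).decode()
print(reply)  # → echo: show stat

stop.set()
t.join(timeout=1.0)
```

Requires a platform with `AF_UNIX` (Linux/macOS). The timeout-based polling trades a little idle CPU for a clean, signal-free shutdown path.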
class DaemonHarness:
    """In-process threaded harness for scheduler/reconciler/API integration."""

    def __init__(
        self,
        root: Path,
        *,
        db_path: Path | None = None,
        runtime: FakeRuntime | None = None,
        max_slots: int = 3,
        min_slots: int = 0,
        idle_scale_down_seconds: int = 1,
        drain_timeout_seconds: int = 120,
    ) -> None:
        root.mkdir(parents=True, exist_ok=True)
        self.clock = FakeClock()
        self.metrics = MetricsRegistry()
        self.runtime = runtime or FakeRuntime(launch_latency_ticks=2, ip_delay_ticks=1)
        self._stop_event = threading.Event()
        self._threads: list[threading.Thread] = []
        self._reconcile_lock = threading.Lock()

        self._db_path = db_path or (root / "state.db")
        self._socket_path = root / "haproxy.sock"
        self._slot_ids = [f"slot{i:03d}" for i in range(1, 4)]

        self.config = AppConfig(
            aws=AwsConfig(region="us-east-1"),
            haproxy=HaproxyConfig(
                runtime_socket=str(self._socket_path),
                backend="all",
                slot_prefix="slot",
                slot_count=3,
                check_ready_up_count=1,
            ),
            capacity=CapacityConfig(
                default_system="x86_64-linux",
                max_slots=max_slots,
                min_slots=min_slots,
                max_leases_per_slot=1,
                target_warm_slots=0,
                reservation_ttl_seconds=1200,
                idle_scale_down_seconds=idle_scale_down_seconds,
                drain_timeout_seconds=drain_timeout_seconds,
            ),
            scheduler=SchedulerConfig(tick_seconds=0.05, reconcile_seconds=0.05),
        )

        self.db = StateDB(str(self._db_path), clock=self.clock)
        self.db.init_schema()
        self.db.init_slots("slot", 3, "x86_64-linux", "all")

        self.haproxy_server = FakeHAProxySocketServer(self._socket_path, "all", self._slot_ids)
        self.haproxy = HAProxyRuntime(str(self._socket_path), "all", "slot")
        self.reconciler = Reconciler(
            self.db,
            self.runtime,
            self.haproxy,
            self.config,
            self.clock,
            self.metrics,
        )

        app = create_app(
            self.db,
            self.config,
            self.clock,
            self.metrics,
            reconcile_now=self.reconcile_now,
        )
        self.client = TestClient(app)

    def start(self) -> None:
        self.haproxy_server.start()
        with self._reconcile_lock:
            self.runtime.tick()
            self.reconciler.tick()
        self._threads = [
            threading.Thread(target=self._scheduler_loop, name="sched", daemon=True),
            threading.Thread(target=self._reconciler_loop, name="recon", daemon=True),
        ]
        for thread in self._threads:
            thread.start()

    def stop(self) -> None:
        self._stop_event.set()
        for thread in self._threads:
            thread.join(timeout=2.0)
        self.client.close()
        self.haproxy_server.stop()
        self.db.close()

    def create_reservation(self, reason: str) -> str:
        response = self.client.post(
            "/v1/reservations",
            json={"system": "x86_64-linux", "reason": reason},
        )
        assert response.status_code == 200
        return str(response.json()["reservation_id"])

    def release_reservation(self, reservation_id: str) -> None:
        response = self.client.post(f"/v1/reservations/{reservation_id}/release")
        assert response.status_code == 200

    def reservation(self, reservation_id: str) -> dict:
        response = self.client.get(f"/v1/reservations/{reservation_id}")
        assert response.status_code == 200
        return response.json()

    def wait_for(self, predicate, timeout: float = 6.0) -> None:  # noqa: ANN001
        deadline = time.time() + timeout
        while time.time() < deadline:
            if predicate():
                return
            time.sleep(0.02)
        raise AssertionError("condition not met before timeout")

    def reconcile_now(self) -> dict[str, bool]:
        with self._reconcile_lock:
            self.runtime.tick()
            self.reconciler.tick()
        return {"triggered": True}

    def _scheduler_loop(self) -> None:
        while not self._stop_event.is_set():
            scheduling_tick(self.db, self.runtime, self.config, self.clock, self.metrics)
            self._stop_event.wait(self.config.scheduler.tick_seconds)

    def _reconciler_loop(self) -> None:
        while not self._stop_event.is_set():
            with self._reconcile_lock:
                self.runtime.tick()
                self.reconciler.tick()
            self._stop_event.wait(self.config.scheduler.reconcile_seconds)
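`DaemonHarness.wait_for` is the glue that lets the tests assert on eventually-consistent state: poll a predicate until it holds or a deadline passes. The same helper, standalone, with a timer simulating the background threads flipping state:

```python
import threading
import time


def wait_for(predicate, timeout: float = 1.0, interval: float = 0.02) -> None:
    """Poll `predicate` until it returns truthy or the deadline passes."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return
        time.sleep(interval)
    raise AssertionError("condition not met before timeout")


state = {"ready": False}

# Simulate an asynchronous state change (scheduler/reconciler work) that
# completes a little later.
threading.Timer(0.1, lambda: state.update(ready=True)).start()
wait_for(lambda: state["ready"])
print("ready")
```

Polling with a short sleep keeps the tests deterministic-enough without wiring real condition variables into the daemon under test; the cost is up to one `interval` of extra latency per check.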
def test_cold_start_reservation_launch_bind_ready(tmp_path: Path) -> None:
    harness = DaemonHarness(tmp_path)
    harness.start()
    try:
        reservation_id = harness.create_reservation("cold-start")
        harness.wait_for(lambda: harness.reservation(reservation_id)["phase"] == "ready")
        reservation = harness.reservation(reservation_id)
        assert reservation["slot"] is not None
        slot = harness.db.get_slot(reservation["slot"])
        assert slot is not None
        assert slot["state"] == SlotState.READY.value
        assert slot["instance_ip"] is not None
    finally:
        harness.stop()


def test_burst_three_concurrent_reservations(tmp_path: Path) -> None:
    harness = DaemonHarness(tmp_path, max_slots=3)
    harness.start()
    try:
        reservation_ids = [harness.create_reservation(f"burst-{i}") for i in range(3)]
        harness.wait_for(
            lambda: all(harness.reservation(rid)["phase"] == "ready" for rid in reservation_ids),
            timeout=8.0,
        )
        slots = [harness.reservation(rid)["slot"] for rid in reservation_ids]
        assert len(set(slots)) == 3
    finally:
        harness.stop()


def test_scale_down_after_release_and_idle_timeout(tmp_path: Path) -> None:
    harness = DaemonHarness(tmp_path, idle_scale_down_seconds=1, drain_timeout_seconds=0)
    harness.start()
    try:
        reservation_id = harness.create_reservation("scale-down")
        harness.wait_for(lambda: harness.reservation(reservation_id)["phase"] == "ready")
        slot_id = str(harness.reservation(reservation_id)["slot"])

        harness.release_reservation(reservation_id)
        harness.clock.advance(2)
        harness.wait_for(
            lambda: (
                harness.db.get_slot(slot_id) is not None
                and harness.db.get_slot(slot_id)["state"] == SlotState.EMPTY.value
            )
        )
    finally:
        harness.stop()


def test_restart_recovery_midflight(tmp_path: Path) -> None:
    db_path = tmp_path / "state.db"
    runtime = FakeRuntime(launch_latency_ticks=6, ip_delay_ticks=2)

    first = DaemonHarness(tmp_path / "run1", db_path=db_path, runtime=runtime)
    first.start()
    reservation_id = first.create_reservation("restart-midflight")
    first.wait_for(
        lambda: len(first.db.list_slots(SlotState.LAUNCHING)) > 0,
        timeout=4.0,
    )
    first.stop()

    second = DaemonHarness(tmp_path / "run2", db_path=db_path, runtime=runtime)
    second.start()
    try:
        second.wait_for(lambda: second.reservation(reservation_id)["phase"] == "ready", timeout=8.0)
    finally:
        second.stop()


def test_interruption_recovery_pending_reservation_resolves(tmp_path: Path) -> None:
    harness = DaemonHarness(tmp_path, max_slots=2, idle_scale_down_seconds=60)
    harness.start()
    try:
        first_reservation = harness.create_reservation("baseline")
        harness.wait_for(lambda: harness.reservation(first_reservation)["phase"] == "ready")
        slot_id = str(harness.reservation(first_reservation)["slot"])
        instance_id = str(harness.reservation(first_reservation)["instance_id"])

        second_reservation = harness.create_reservation("post-interruption")
        harness.release_reservation(first_reservation)

        harness.runtime.inject_interruption(instance_id)
        harness.runtime._instances[instance_id].state = "shutting-down"

        harness.wait_for(
            lambda: (
                harness.db.get_slot(slot_id) is not None
                and harness.db.get_slot(slot_id)["state"]
                in {
                    SlotState.DRAINING.value,
                    SlotState.TERMINATING.value,
                    SlotState.EMPTY.value,
                }
            ),
            timeout=6.0,
        )
        harness.wait_for(
            lambda: harness.reservation(second_reservation)["phase"] == "ready",
            timeout=10.0,
        )
    finally:
        harness.stop()
148	agent/nix_builder_autoscaler/tests/test_haproxy_provider.py	Normal file
@@ -0,0 +1,148 @@
"""Unit tests for the HAProxy provider, mocking at socket level."""

from unittest.mock import MagicMock, patch

import pytest

from nix_builder_autoscaler.providers.haproxy import HAProxyError, HAProxyRuntime

# HAProxy `show stat` CSV — trimmed to columns the parser uses.
# Full output has many more columns; we keep through `status` (col 17).
SHOW_STAT_CSV = (
    "# pxname,svname,qcur,qmax,scur,smax,slim,stot,"
    "bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status\n"
    "all,BACKEND,0,0,2,5,200,100,5000,6000,0,0,,0,0,0,0,UP\n"
    "all,slot001,0,0,1,3,50,50,2500,3000,0,0,,0,0,0,0,UP\n"
    "all,slot002,0,0,1,2,50,50,2500,3000,0,0,,0,0,0,0,DOWN\n"
    "all,slot003,0,0,0,0,50,0,0,0,0,0,,0,0,0,0,MAINT\n"
)


class TestSetSlotAddr:
    @patch("nix_builder_autoscaler.providers.haproxy.socket.socket")
    def test_sends_correct_command(self, mock_socket_cls):
        mock_sock = MagicMock()
        mock_socket_cls.return_value = mock_sock
        mock_sock.recv.side_effect = [b"\n", b""]

        h = HAProxyRuntime("/tmp/test.sock", "all", "slot")
        h.set_slot_addr("slot001", "100.64.0.1", 22)

        mock_sock.connect.assert_called_once_with("/tmp/test.sock")
        mock_sock.sendall.assert_called_once_with(
            b"set server all/slot001 addr 100.64.0.1 port 22\n"
        )


class TestEnableSlot:
    @patch("nix_builder_autoscaler.providers.haproxy.socket.socket")
    def test_sends_correct_command(self, mock_socket_cls):
        mock_sock = MagicMock()
        mock_socket_cls.return_value = mock_sock
        mock_sock.recv.side_effect = [b"\n", b""]

        h = HAProxyRuntime("/tmp/test.sock", "all", "slot")
        h.enable_slot("slot001")

        mock_sock.sendall.assert_called_once_with(b"enable server all/slot001\n")


class TestDisableSlot:
    @patch("nix_builder_autoscaler.providers.haproxy.socket.socket")
    def test_sends_correct_command(self, mock_socket_cls):
        mock_sock = MagicMock()
        mock_socket_cls.return_value = mock_sock
        mock_sock.recv.side_effect = [b"\n", b""]

        h = HAProxyRuntime("/tmp/test.sock", "all", "slot")
        h.disable_slot("slot001")

        mock_sock.sendall.assert_called_once_with(b"disable server all/slot001\n")


class TestReadSlotHealth:
    @patch("nix_builder_autoscaler.providers.haproxy.socket.socket")
    def test_parses_csv_correctly(self, mock_socket_cls):
        mock_sock = MagicMock()
        mock_socket_cls.return_value = mock_sock
        mock_sock.recv.side_effect = [SHOW_STAT_CSV.encode(), b""]

        h = HAProxyRuntime("/tmp/test.sock", "all", "slot")
        health = h.read_slot_health()

        # BACKEND row should be excluded (svname "BACKEND" doesn't start with "slot")
        assert len(health) == 3

        assert health["slot001"].status == "UP"
        assert health["slot001"].scur == 1
        assert health["slot001"].qcur == 0

        assert health["slot002"].status == "DOWN"
        assert health["slot002"].scur == 1
        assert health["slot002"].qcur == 0

        assert health["slot003"].status == "MAINT"
        assert health["slot003"].scur == 0
        assert health["slot003"].qcur == 0


class TestSlotIsUp:
    @patch("nix_builder_autoscaler.providers.haproxy.socket.socket")
    def test_returns_true_for_up_slot(self, mock_socket_cls):
        mock_sock = MagicMock()
        mock_socket_cls.return_value = mock_sock
        mock_sock.recv.side_effect = [SHOW_STAT_CSV.encode(), b""]

        h = HAProxyRuntime("/tmp/test.sock", "all", "slot")
        assert h.slot_is_up("slot001") is True

    @patch("nix_builder_autoscaler.providers.haproxy.socket.socket")
    def test_returns_false_for_down_slot(self, mock_socket_cls):
        mock_sock = MagicMock()
        mock_socket_cls.return_value = mock_sock
        mock_sock.recv.side_effect = [SHOW_STAT_CSV.encode(), b""]

        h = HAProxyRuntime("/tmp/test.sock", "all", "slot")
        assert h.slot_is_up("slot002") is False


class TestErrorHandling:
    @patch("nix_builder_autoscaler.providers.haproxy.socket.socket")
    def test_unrecognized_slot_raises_haproxy_error(self, mock_socket_cls):
        mock_sock = MagicMock()
        mock_socket_cls.return_value = mock_sock
        mock_sock.recv.side_effect = [b"No such server.\n", b""]

        h = HAProxyRuntime("/tmp/test.sock", "all", "slot")
        with pytest.raises(HAProxyError, match="No such server"):
            h.set_slot_addr("slot999", "100.64.0.1")
|
||||
|
||||
@patch("nix_builder_autoscaler.providers.haproxy.socket.socket")
|
||||
def test_socket_not_found_raises_haproxy_error(self, mock_socket_cls):
|
||||
mock_sock = MagicMock()
|
||||
mock_socket_cls.return_value = mock_sock
|
||||
mock_sock.connect.side_effect = FileNotFoundError("No such file")
|
||||
|
||||
h = HAProxyRuntime("/tmp/nonexistent.sock", "all", "slot")
|
||||
with pytest.raises(HAProxyError, match="socket not found"):
|
||||
h.set_slot_addr("slot001", "100.64.0.1")
|
||||
|
||||
@patch("nix_builder_autoscaler.providers.haproxy.socket.socket")
|
||||
def test_connection_refused_raises_haproxy_error(self, mock_socket_cls):
|
||||
mock_sock = MagicMock()
|
||||
mock_socket_cls.return_value = mock_sock
|
||||
mock_sock.connect.side_effect = ConnectionRefusedError("Connection refused")
|
||||
|
||||
h = HAProxyRuntime("/tmp/test.sock", "all", "slot")
|
||||
with pytest.raises(HAProxyError, match="Connection refused"):
|
||||
h.enable_slot("slot001")
|
||||
|
||||
@patch("nix_builder_autoscaler.providers.haproxy.socket.socket")
|
||||
def test_slot_session_count_missing_slot_raises(self, mock_socket_cls):
|
||||
mock_sock = MagicMock()
|
||||
mock_socket_cls.return_value = mock_sock
|
||||
mock_sock.recv.side_effect = [SHOW_STAT_CSV.encode(), b""]
|
||||
|
||||
h = HAProxyRuntime("/tmp/test.sock", "all", "slot")
|
||||
with pytest.raises(HAProxyError, match="Slot not found"):
|
||||
h.slot_session_count("slot_nonexistent")
|
||||
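The socket mocks above all follow the same shape: one command line written to a Unix admin socket, then a read until EOF. A minimal, self-contained sketch of that round trip, where the in-process server is a hypothetical stand-in for HAProxy, not the real daemon:

```python
import os
import socket
import tempfile
import threading


def _fake_stats_server(path: str, reply: bytes) -> threading.Thread:
    """Accept one connection, read the command, reply, then close (client sees EOF)."""
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen(1)

    def serve() -> None:
        conn, _ = srv.accept()
        conn.recv(4096)      # read the command line
        conn.sendall(reply)  # reply, then close => EOF on the client side
        conn.close()
        srv.close()

    t = threading.Thread(target=serve, daemon=True)
    t.start()
    return t


def send_command(path: str, command: str) -> str:
    """Send one command over the Unix socket and read until EOF."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(path)
    sock.sendall(command.encode() + b"\n")
    chunks = []
    while data := sock.recv(4096):
        chunks.append(data)
    sock.close()
    return b"".join(chunks).decode()


path = os.path.join(tempfile.mkdtemp(), "stats.sock")
t = _fake_stats_server(path, b"\n")
reply = send_command(path, "enable server all/slot001")
t.join()
print(repr(reply))  # '\n' — HAProxy replies with a bare newline on success
```

This is why the mocks set `recv.side_effect = [b"\n", b""]`: the first chunk is the success reply, the empty bytes signal EOF.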
235 agent/nix_builder_autoscaler/tests/test_reservations_api.py Normal file

@@ -0,0 +1,235 @@
"""Reservations API unit tests."""

from __future__ import annotations

from datetime import UTC, datetime
from typing import Any

from fastapi.testclient import TestClient

from nix_builder_autoscaler.api import create_app
from nix_builder_autoscaler.config import AppConfig, CapacityConfig
from nix_builder_autoscaler.metrics import MetricsRegistry
from nix_builder_autoscaler.models import SlotState
from nix_builder_autoscaler.providers.clock import FakeClock
from nix_builder_autoscaler.state_db import StateDB


def _make_client(
    *,
    reconcile_now: Any = None,  # noqa: ANN401
) -> tuple[TestClient, StateDB, FakeClock, MetricsRegistry]:
    clock = FakeClock()
    db = StateDB(":memory:", clock=clock)
    db.init_schema()
    db.init_slots("slot", 3, "x86_64-linux", "all")
    config = AppConfig(capacity=CapacityConfig(reservation_ttl_seconds=1200))
    metrics = MetricsRegistry()
    app = create_app(db, config, clock, metrics, reconcile_now=reconcile_now)
    return TestClient(app), db, clock, metrics


def test_create_reservation_returns_200() -> None:
    client, _, _, _ = _make_client()
    response = client.post("/v1/reservations", json={"system": "x86_64-linux", "reason": "test"})
    assert response.status_code == 200
    body = response.json()
    assert body["reservation_id"].startswith("resv_")
    assert body["phase"] == "pending"
    assert body["system"] == "x86_64-linux"
    assert "created_at" in body
    assert "expires_at" in body


def test_get_reservation_returns_current_phase() -> None:
    client, _, _, _ = _make_client()
    created = client.post("/v1/reservations", json={"system": "x86_64-linux", "reason": "test"})
    reservation_id = created.json()["reservation_id"]
    response = client.get(f"/v1/reservations/{reservation_id}")
    assert response.status_code == 200
    assert response.json()["phase"] == "pending"


def test_release_reservation_moves_to_released() -> None:
    client, _, _, _ = _make_client()
    created = client.post("/v1/reservations", json={"system": "x86_64-linux", "reason": "test"})
    reservation_id = created.json()["reservation_id"]
    response = client.post(f"/v1/reservations/{reservation_id}/release")
    assert response.status_code == 200
    assert response.json()["phase"] == "released"


def test_unknown_reservation_returns_404() -> None:
    client, _, _, _ = _make_client()
    response = client.get("/v1/reservations/resv_nonexistent")
    assert response.status_code == 404
    body = response.json()
    assert body["error"]["code"] == "not_found"
    assert "request_id" in body


def test_malformed_body_returns_422() -> None:
    client, _, _, _ = _make_client()
    response = client.post("/v1/reservations", json={"invalid": "data"})
    assert response.status_code == 422


def test_list_reservations_returns_all() -> None:
    client, _, _, _ = _make_client()
    client.post("/v1/reservations", json={"system": "x86_64-linux", "reason": "a"})
    client.post("/v1/reservations", json={"system": "x86_64-linux", "reason": "b"})
    response = client.get("/v1/reservations")
    assert response.status_code == 200
    assert len(response.json()) == 2


def test_list_reservations_filters_by_phase() -> None:
    client, _, _, _ = _make_client()
    created = client.post("/v1/reservations", json={"system": "x86_64-linux", "reason": "test"})
    reservation_id = created.json()["reservation_id"]
    client.post(f"/v1/reservations/{reservation_id}/release")
    response = client.get("/v1/reservations?phase=released")
    assert response.status_code == 200
    body = response.json()
    assert len(body) == 1
    assert body[0]["phase"] == "released"


def test_list_slots_returns_all_slots() -> None:
    client, _, _, _ = _make_client()
    response = client.get("/v1/slots")
    assert response.status_code == 200
    assert len(response.json()) == 3


def test_state_summary_returns_counts() -> None:
    client, _, _, _ = _make_client()
    response = client.get("/v1/state/summary")
    assert response.status_code == 200
    body = response.json()
    assert body["slots"]["total"] == 3
    assert body["slots"]["empty"] == 3


def test_health_live_returns_ok() -> None:
    client, _, _, _ = _make_client()
    response = client.get("/health/live")
    assert response.status_code == 200
    assert response.json()["status"] == "ok"


def test_health_ready_returns_ok_when_no_checks() -> None:
    client, _, _, _ = _make_client()
    response = client.get("/health/ready")
    assert response.status_code == 200
    assert response.json()["status"] == "ok"


def test_health_ready_degraded_when_ready_check_fails() -> None:
    clock = FakeClock()
    db = StateDB(":memory:", clock=clock)
    db.init_schema()
    db.init_slots("slot", 3, "x86_64-linux", "all")
    config = AppConfig(capacity=CapacityConfig(reservation_ttl_seconds=1200))
    metrics = MetricsRegistry()
    app = create_app(db, config, clock, metrics, ready_check=lambda: False)
    client = TestClient(app)
    response = client.get("/health/ready")
    assert response.status_code == 503
    assert response.json()["status"] == "degraded"


def test_metrics_returns_prometheus_text() -> None:
    client, _, _, metrics = _make_client()
    metrics.counter("autoscaler_test_counter", {}, 1.0)
    response = client.get("/metrics")
    assert response.status_code == 200
    assert "text/plain" in response.headers["content-type"]
    assert "autoscaler_test_counter" in response.text


def test_capacity_hint_accepted() -> None:
    client, _, _, _ = _make_client()
    response = client.post(
        "/v1/hints/capacity",
        json={
            "builder": "buildbot",
            "queued": 2,
            "running": 4,
            "system": "x86_64-linux",
            "timestamp": datetime(2026, 1, 1, tzinfo=UTC).isoformat(),
        },
    )
    assert response.status_code == 200
    assert response.json()["status"] == "accepted"


def test_release_nonexistent_returns_404() -> None:
    client, _, _, _ = _make_client()
    response = client.post("/v1/reservations/resv_nonexistent/release")
    assert response.status_code == 404
    assert response.json()["error"]["code"] == "not_found"


def test_admin_drain_success() -> None:
    client, db, _, _ = _make_client()
    db.update_slot_state("slot001", SlotState.LAUNCHING, instance_id="i-test")
    db.update_slot_state("slot001", SlotState.BOOTING)
    db.update_slot_state("slot001", SlotState.BINDING, instance_ip="100.64.0.1")
    db.update_slot_state("slot001", SlotState.READY)

    response = client.post("/v1/admin/drain", json={"slot_id": "slot001"})
    assert response.status_code == 200
    assert response.json()["state"] == "draining"
    slot = db.get_slot("slot001")
    assert slot is not None
    assert slot["state"] == SlotState.DRAINING.value


def test_admin_drain_invalid_state_returns_409() -> None:
    client, _, _, _ = _make_client()
    response = client.post("/v1/admin/drain", json={"slot_id": "slot001"})
    assert response.status_code == 409
    assert response.json()["error"]["code"] == "invalid_state"


def test_admin_unquarantine_success() -> None:
    client, db, _, _ = _make_client()
    db.update_slot_state("slot001", SlotState.ERROR, instance_id="i-bad")

    response = client.post("/v1/admin/unquarantine", json={"slot_id": "slot001"})
    assert response.status_code == 200
    assert response.json()["state"] == "empty"
    slot = db.get_slot("slot001")
    assert slot is not None
    assert slot["state"] == SlotState.EMPTY.value
    assert slot["instance_id"] is None


def test_admin_unquarantine_invalid_state_returns_409() -> None:
    client, _, _, _ = _make_client()
    response = client.post("/v1/admin/unquarantine", json={"slot_id": "slot001"})
    assert response.status_code == 409
    assert response.json()["error"]["code"] == "invalid_state"


def test_admin_reconcile_now_not_configured_returns_503() -> None:
    client, _, _, _ = _make_client()
    response = client.post("/v1/admin/reconcile-now")
    assert response.status_code == 503
    assert response.json()["error"]["code"] == "not_configured"


def test_admin_reconcile_now_success() -> None:
    called = {"value": False}

    def _reconcile_now() -> dict[str, object]:
        called["value"] = True
        return {"triggered": True}

    client, _, _, _ = _make_client(reconcile_now=_reconcile_now)
    response = client.post("/v1/admin/reconcile-now")
    assert response.status_code == 200
    assert response.json()["status"] == "accepted"
    assert response.json()["triggered"] is True
    assert called["value"] is True
415 agent/nix_builder_autoscaler/tests/test_runtime_ec2.py Normal file

@@ -0,0 +1,415 @@
"""Unit tests for the EC2 runtime adapter using botocore Stubber."""

from datetime import UTC, datetime
from unittest.mock import patch

import boto3
import pytest
from botocore.stub import Stubber

from nix_builder_autoscaler.config import AwsConfig
from nix_builder_autoscaler.runtime.base import RuntimeError as RuntimeAdapterError
from nix_builder_autoscaler.runtime.ec2 import EC2Runtime


def _make_config():
    return AwsConfig(
        region="us-east-1",
        launch_template_id="lt-abc123",
        subnet_ids=["subnet-aaa", "subnet-bbb"],
        security_group_ids=["sg-111"],
        instance_profile_arn="arn:aws:iam::123456789012:instance-profile/nix-builder",
    )


def _make_runtime(stubber, ec2_client, **kwargs):
    config = kwargs.pop("config", _make_config())
    environment = kwargs.pop("environment", "dev")
    stubber.activate()
    return EC2Runtime(config, environment=environment, _client=ec2_client)


class TestLaunchSpot:
    def test_correct_params_and_returns_instance_id(self):
        config = _make_config()
        ec2_client = boto3.client("ec2", region_name="us-east-1")
        stubber = Stubber(ec2_client)

        expected_params = {
            "MinCount": 1,
            "MaxCount": 1,
            "LaunchTemplate": {
                "LaunchTemplateId": "lt-abc123",
                "Version": "$Latest",
            },
            "InstanceMarketOptions": {
                "MarketType": "spot",
                "SpotOptions": {
                    "SpotInstanceType": "one-time",
                    "InstanceInterruptionBehavior": "terminate",
                },
            },
            "SubnetId": "subnet-aaa",
            "UserData": "#!/bin/bash\necho hello",
            "TagSpecifications": [
                {
                    "ResourceType": "instance",
                    "Tags": [
                        {"Key": "Name", "Value": "nix-builder-slot001"},
                        {"Key": "AutoscalerSlot", "Value": "slot001"},
                        {"Key": "ManagedBy", "Value": "nix-builder-autoscaler"},
                        {"Key": "Service", "Value": "nix-builder"},
                        {"Key": "Environment", "Value": "dev"},
                    ],
                }
            ],
        }

        response = {
            "Instances": [{"InstanceId": "i-12345678"}],
            "OwnerId": "123456789012",
        }

        stubber.add_response("run_instances", response, expected_params)
        runtime = _make_runtime(stubber, ec2_client, config=config)

        iid = runtime.launch_spot("slot001", "#!/bin/bash\necho hello")
        assert iid == "i-12345678"
        stubber.assert_no_pending_responses()

    def test_round_robin_subnets(self):
        config = _make_config()
        ec2_client = boto3.client("ec2", region_name="us-east-1")
        stubber = Stubber(ec2_client)

        # Two launches should use subnet-aaa then subnet-bbb
        for _subnet in ["subnet-aaa", "subnet-bbb"]:
            stubber.add_response(
                "run_instances",
                {"Instances": [{"InstanceId": "i-abc"}], "OwnerId": "123"},
            )

        runtime = _make_runtime(stubber, ec2_client, config=config)
        runtime.launch_spot("slot001", "")
        runtime.launch_spot("slot002", "")
        stubber.assert_no_pending_responses()


class TestDescribeInstance:
    def test_normalizes_response(self):
        ec2_client = boto3.client("ec2", region_name="us-east-1")
        stubber = Stubber(ec2_client)

        launch_time = datetime(2026, 1, 15, 12, 30, 0, tzinfo=UTC)
        response = {
            "Reservations": [
                {
                    "Instances": [
                        {
                            "InstanceId": "i-running1",
                            "State": {"Code": 16, "Name": "running"},
                            "LaunchTime": launch_time,
                            "Tags": [
                                {"Key": "AutoscalerSlot", "Value": "slot001"},
                            ],
                        }
                    ],
                }
            ],
        }

        stubber.add_response(
            "describe_instances",
            response,
            {"InstanceIds": ["i-running1"]},
        )
        runtime = _make_runtime(stubber, ec2_client)

        info = runtime.describe_instance("i-running1")
        assert info["state"] == "running"
        assert info["tailscale_ip"] is None
        assert info["launch_time"] == launch_time.isoformat()

    @patch.object(
        EC2Runtime,
        "_read_tailscale_status",
        return_value={
            "Peer": {
                "peer1": {
                    "HostName": "nix-builder-slot001-i-running1",
                    "Online": True,
                    "TailscaleIPs": ["100.64.0.10"],
                }
            }
        },
    )
    def test_discovers_tailscale_ip_from_localapi(self, _mock_status):
        ec2_client = boto3.client("ec2", region_name="us-east-1")
        stubber = Stubber(ec2_client)

        launch_time = datetime(2026, 1, 15, 12, 30, 0, tzinfo=UTC)
        response = {
            "Reservations": [
                {
                    "Instances": [
                        {
                            "InstanceId": "i-running1",
                            "State": {"Code": 16, "Name": "running"},
                            "LaunchTime": launch_time,
                            "Tags": [{"Key": "AutoscalerSlot", "Value": "slot001"}],
                        }
                    ],
                }
            ],
        }
        stubber.add_response(
            "describe_instances",
            response,
            {"InstanceIds": ["i-running1"]},
        )
        runtime = _make_runtime(stubber, ec2_client)

        info = runtime.describe_instance("i-running1")
        assert info["tailscale_ip"] == "100.64.0.10"

    @patch.object(EC2Runtime, "_read_tailscale_status", return_value={"Peer": {}})
    def test_discovery_unavailable_returns_none(self, _mock_status):
        ec2_client = boto3.client("ec2", region_name="us-east-1")
        stubber = Stubber(ec2_client)

        launch_time = datetime(2026, 1, 15, 12, 30, 0, tzinfo=UTC)
        response = {
            "Reservations": [
                {
                    "Instances": [
                        {
                            "InstanceId": "i-running1",
                            "State": {"Code": 16, "Name": "running"},
                            "LaunchTime": launch_time,
                            "Tags": [{"Key": "AutoscalerSlot", "Value": "slot001"}],
                        }
                    ],
                }
            ],
        }
        stubber.add_response(
            "describe_instances",
            response,
            {"InstanceIds": ["i-running1"]},
        )
        runtime = _make_runtime(stubber, ec2_client)

        info = runtime.describe_instance("i-running1")
        assert info["tailscale_ip"] is None

    @patch.object(
        EC2Runtime,
        "_read_tailscale_status",
        return_value={
            "Peer": {
                "peer1": {
                    "HostName": "nix-builder-slot001-old",
                    "Online": True,
                    "TailscaleIPs": ["100.64.0.10"],
                },
                "peer2": {
                    "HostName": "nix-builder-slot001-new",
                    "Online": True,
                    "TailscaleIPs": ["100.64.0.11"],
                },
            }
        },
    )
    def test_ambiguous_slot_match_returns_none(self, _mock_status):
        ec2_client = boto3.client("ec2", region_name="us-east-1")
        stubber = Stubber(ec2_client)

        launch_time = datetime(2026, 1, 15, 12, 30, 0, tzinfo=UTC)
        response = {
            "Reservations": [
                {
                    "Instances": [
                        {
                            "InstanceId": "i-running1",
                            "State": {"Code": 16, "Name": "running"},
                            "LaunchTime": launch_time,
                            "Tags": [{"Key": "AutoscalerSlot", "Value": "slot001"}],
                        }
                    ],
                }
            ],
        }
        stubber.add_response(
            "describe_instances",
            response,
            {"InstanceIds": ["i-running1"]},
        )
        runtime = _make_runtime(stubber, ec2_client)

        info = runtime.describe_instance("i-running1")
        assert info["tailscale_ip"] is None

    def test_localapi_permission_error_returns_none(self):
        ec2_client = boto3.client("ec2", region_name="us-east-1")
        runtime = EC2Runtime(_make_config(), _client=ec2_client)

        with patch(
            "nix_builder_autoscaler.runtime.ec2._UnixSocketHTTPConnection.connect",
            side_effect=PermissionError,
        ):
            assert runtime._read_tailscale_status() is None

    def test_missing_instance_returns_terminated(self):
        ec2_client = boto3.client("ec2", region_name="us-east-1")
        stubber = Stubber(ec2_client)

        stubber.add_response(
            "describe_instances",
            {"Reservations": []},
            {"InstanceIds": ["i-gone"]},
        )
        runtime = _make_runtime(stubber, ec2_client)

        info = runtime.describe_instance("i-gone")
        assert info["state"] == "terminated"
        assert info["tailscale_ip"] is None
        assert info["launch_time"] is None


class TestListManagedInstances:
    def test_filters_by_tag(self):
        ec2_client = boto3.client("ec2", region_name="us-east-1")
        stubber = Stubber(ec2_client)

        expected_params = {
            "Filters": [
                {"Name": "tag:ManagedBy", "Values": ["nix-builder-autoscaler"]},
                {
                    "Name": "instance-state-name",
                    "Values": ["pending", "running", "shutting-down", "stopping"],
                },
            ],
        }

        response = {
            "Reservations": [
                {
                    "Instances": [
                        {
                            "InstanceId": "i-aaa",
                            "State": {"Code": 16, "Name": "running"},
                            "Tags": [
                                {"Key": "AutoscalerSlot", "Value": "slot001"},
                                {"Key": "ManagedBy", "Value": "nix-builder-autoscaler"},
                            ],
                        },
                        {
                            "InstanceId": "i-bbb",
                            "State": {"Code": 0, "Name": "pending"},
                            "Tags": [
                                {"Key": "AutoscalerSlot", "Value": "slot002"},
                                {"Key": "ManagedBy", "Value": "nix-builder-autoscaler"},
                            ],
                        },
                    ],
                }
            ],
        }

        stubber.add_response("describe_instances", response, expected_params)
        runtime = _make_runtime(stubber, ec2_client)

        managed = runtime.list_managed_instances()
        assert len(managed) == 2
        assert managed[0]["instance_id"] == "i-aaa"
        assert managed[0]["state"] == "running"
        assert managed[0]["slot_id"] == "slot001"
        assert managed[1]["instance_id"] == "i-bbb"
        assert managed[1]["state"] == "pending"
        assert managed[1]["slot_id"] == "slot002"


class TestTerminateInstance:
    def test_calls_terminate_api(self):
        ec2_client = boto3.client("ec2", region_name="us-east-1")
        stubber = Stubber(ec2_client)

        response = {
            "TerminatingInstances": [
                {
                    "InstanceId": "i-kill",
                    "CurrentState": {"Code": 32, "Name": "shutting-down"},
                    "PreviousState": {"Code": 16, "Name": "running"},
                }
            ],
        }

        stubber.add_response(
            "terminate_instances",
            response,
            {"InstanceIds": ["i-kill"]},
        )
        runtime = _make_runtime(stubber, ec2_client)

        # Should not raise
        runtime.terminate_instance("i-kill")
        stubber.assert_no_pending_responses()


class TestErrorClassification:
    def test_insufficient_capacity_classified(self):
        ec2_client = boto3.client("ec2", region_name="us-east-1")
        stubber = Stubber(ec2_client)

        stubber.add_client_error(
            "run_instances",
            service_error_code="InsufficientInstanceCapacity",
            service_message="Insufficient capacity",
        )
        runtime = _make_runtime(stubber, ec2_client)

        with pytest.raises(RuntimeAdapterError) as exc_info:
            runtime.launch_spot("slot001", "#!/bin/bash")
        assert exc_info.value.category == "capacity_unavailable"

    @patch("nix_builder_autoscaler.runtime.ec2.time.sleep")
    def test_request_limit_exceeded_retried(self, mock_sleep):
        ec2_client = boto3.client("ec2", region_name="us-east-1")
        stubber = Stubber(ec2_client)

        # First call: throttled
        stubber.add_client_error(
            "run_instances",
            service_error_code="RequestLimitExceeded",
            service_message="Rate exceeded",
        )
        # Second call: success
        stubber.add_response(
            "run_instances",
            {"Instances": [{"InstanceId": "i-retry123"}], "OwnerId": "123"},
        )
        runtime = _make_runtime(stubber, ec2_client)

        iid = runtime.launch_spot("slot001", "#!/bin/bash")
        assert iid == "i-retry123"
        assert mock_sleep.called
        stubber.assert_no_pending_responses()

    @patch("nix_builder_autoscaler.runtime.ec2.time.sleep")
    def test_request_limit_exceeded_exhausted(self, mock_sleep):
        """After max retries, RequestLimitExceeded raises with 'throttled' category."""
        ec2_client = boto3.client("ec2", region_name="us-east-1")
        stubber = Stubber(ec2_client)

        # 4 errors (max_retries=3: attempt 0,1,2,3 all fail)
        for _ in range(4):
            stubber.add_client_error(
                "run_instances",
                service_error_code="RequestLimitExceeded",
                service_message="Rate exceeded",
            )
        runtime = _make_runtime(stubber, ec2_client)

        with pytest.raises(RuntimeAdapterError) as exc_info:
            runtime.launch_spot("slot001", "#!/bin/bash")
        assert exc_info.value.category == "throttled"
116 agent/nix_builder_autoscaler/tests/test_runtime_fake.py Normal file

@@ -0,0 +1,116 @@
"""Unit tests for the FakeRuntime adapter."""

import contextlib

from nix_builder_autoscaler.runtime.base import RuntimeError as RuntimeAdapterError
from nix_builder_autoscaler.runtime.fake import FakeRuntime


class TestLaunchSpot:
    def test_returns_synthetic_instance_id(self):
        rt = FakeRuntime()
        iid = rt.launch_spot("slot001", "#!/bin/bash\necho hello")
        assert iid.startswith("i-fake-")
        assert len(iid) > 10

    def test_instance_starts_pending(self):
        rt = FakeRuntime()
        iid = rt.launch_spot("slot001", "")
        info = rt.describe_instance(iid)
        assert info["state"] == "pending"
        assert info["tailscale_ip"] is None


class TestTickProgression:
    def test_transitions_to_running_after_configured_ticks(self):
        rt = FakeRuntime(launch_latency_ticks=3, ip_delay_ticks=1)
        iid = rt.launch_spot("slot001", "")

        for _ in range(2):
            rt.tick()
            assert rt.describe_instance(iid)["state"] == "pending"

        rt.tick()  # tick 3
        assert rt.describe_instance(iid)["state"] == "running"

    def test_tailscale_ip_appears_after_configured_delay(self):
        rt = FakeRuntime(launch_latency_ticks=2, ip_delay_ticks=2)
        iid = rt.launch_spot("slot001", "")

        for _ in range(2):
            rt.tick()
        assert rt.describe_instance(iid)["state"] == "running"
        assert rt.describe_instance(iid)["tailscale_ip"] is None

        rt.tick()  # tick 3 — still no IP (need tick 4)
        assert rt.describe_instance(iid)["tailscale_ip"] is None

        rt.tick()  # tick 4
        info = rt.describe_instance(iid)
        assert info["tailscale_ip"] is not None
        assert info["tailscale_ip"].startswith("100.64.0.")


class TestInjectedFailure:
    def test_launch_failure_raises(self):
        rt = FakeRuntime()
        rt.inject_launch_failure("slot001")
        try:
            rt.launch_spot("slot001", "")
            raise AssertionError("Should have raised")
        except RuntimeAdapterError as e:
            assert e.category == "capacity_unavailable"

    def test_failure_is_one_shot(self):
        rt = FakeRuntime()
        rt.inject_launch_failure("slot001")
        with contextlib.suppress(RuntimeAdapterError):
            rt.launch_spot("slot001", "")
        # Second call should succeed
        iid = rt.launch_spot("slot001", "")
        assert iid.startswith("i-fake-")


class TestInjectedInterruption:
    def test_interruption_returns_terminated(self):
        rt = FakeRuntime(launch_latency_ticks=1)
        iid = rt.launch_spot("slot001", "")
        rt.tick()
        assert rt.describe_instance(iid)["state"] == "running"

        rt.inject_interruption(iid)
        info = rt.describe_instance(iid)
        assert info["state"] == "terminated"

    def test_interruption_is_one_shot(self):
        """After the interruption fires, subsequent describes stay terminated."""
        rt = FakeRuntime(launch_latency_ticks=1)
        iid = rt.launch_spot("slot001", "")
        rt.tick()
        rt.inject_interruption(iid)
        rt.describe_instance(iid)  # consumes the injection
        info = rt.describe_instance(iid)
        assert info["state"] == "terminated"


class TestTerminate:
    def test_terminate_marks_instance(self):
        rt = FakeRuntime(launch_latency_ticks=1)
        iid = rt.launch_spot("slot001", "")
        rt.tick()
        rt.terminate_instance(iid)
        assert rt.describe_instance(iid)["state"] == "terminated"


class TestListManaged:
    def test_lists_non_terminated(self):
        rt = FakeRuntime(launch_latency_ticks=1)
        iid1 = rt.launch_spot("slot001", "")
        iid2 = rt.launch_spot("slot002", "")
        rt.tick()
        rt.terminate_instance(iid1)

        managed = rt.list_managed_instances()
        ids = [m["instance_id"] for m in managed]
        assert iid2 in ids
        assert iid1 not in ids
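The FakeRuntime behavior exercised above follows a common pattern for testing cloud adapters: deterministic, tick-driven state progression instead of wall-clock waits. A hypothetical `TinyFakeRuntime` (an illustration of the pattern, not the project's implementation) can be sketched in a few lines:

```python
import itertools


class TinyFakeRuntime:
    """Instances move pending -> running after a fixed number of ticks."""

    def __init__(self, launch_latency_ticks: int = 2) -> None:
        self._latency = launch_latency_ticks
        self._ids = itertools.count(1)
        self._instances: dict[str, dict] = {}
        self._tick = 0

    def launch(self, slot_id: str) -> str:
        iid = f"i-fake-{next(self._ids):08d}"
        self._instances[iid] = {
            "slot_id": slot_id,
            "launched_at": self._tick,
            "state": "pending",
        }
        return iid

    def tick(self) -> None:
        # Advance simulated time; promote instances whose latency has elapsed.
        self._tick += 1
        for inst in self._instances.values():
            if (
                inst["state"] == "pending"
                and self._tick - inst["launched_at"] >= self._latency
            ):
                inst["state"] = "running"

    def state(self, iid: str) -> str:
        return self._instances[iid]["state"]


rt = TinyFakeRuntime(launch_latency_ticks=2)
iid = rt.launch("slot001")
rt.tick()
print(rt.state(iid))  # pending
rt.tick()
print(rt.state(iid))  # running
```

Because time only moves when the test calls `tick()`, latency boundaries (tick 3 vs. tick 4 in the IP-delay test above) can be asserted exactly, with no sleeps or flakiness.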
194 agent/nix_builder_autoscaler/tests/test_scheduler.py Normal file

@@ -0,0 +1,194 @@
"""Scheduler unit tests — Plan 03."""

from nix_builder_autoscaler.config import AppConfig, AwsConfig, CapacityConfig
from nix_builder_autoscaler.metrics import MetricsRegistry
from nix_builder_autoscaler.models import ReservationPhase, SlotState
from nix_builder_autoscaler.providers.clock import FakeClock
from nix_builder_autoscaler.runtime.fake import FakeRuntime
from nix_builder_autoscaler.scheduler import scheduling_tick
from nix_builder_autoscaler.state_db import StateDB


def _make_env(
    slot_count=3,
    max_slots=3,
    max_leases=1,
    idle_scale_down_seconds=900,
    target_warm=0,
    min_slots=0,
):
    clock = FakeClock()
    db = StateDB(":memory:", clock=clock)
    db.init_schema()
    db.init_slots("slot", slot_count, "x86_64-linux", "all")
    runtime = FakeRuntime(launch_latency_ticks=2, ip_delay_ticks=1)
    config = AppConfig(
        capacity=CapacityConfig(
            max_slots=max_slots,
            max_leases_per_slot=max_leases,
            idle_scale_down_seconds=idle_scale_down_seconds,
            target_warm_slots=target_warm,
            min_slots=min_slots,
            reservation_ttl_seconds=1200,
        ),
        aws=AwsConfig(region="us-east-1"),
    )
    metrics = MetricsRegistry()
    return db, runtime, config, clock, metrics


def _make_slot_ready(db, slot_id, instance_id="i-test1", ip="100.64.0.1"):
    """Transition a slot through the full state machine to ready."""
    db.update_slot_state(slot_id, SlotState.LAUNCHING, instance_id=instance_id)
    db.update_slot_state(slot_id, SlotState.BOOTING)
    db.update_slot_state(slot_id, SlotState.BINDING, instance_ip=ip)
    db.update_slot_state(slot_id, SlotState.READY)


# --- Test cases ---


def test_pending_reservation_assigned_to_ready_slot():
    db, runtime, config, clock, metrics = _make_env()
    _make_slot_ready(db, "slot001")

    resv = db.create_reservation("x86_64-linux", "test", None, 1200)

    scheduling_tick(db, runtime, config, clock, metrics)

    updated = db.get_reservation(resv["reservation_id"])
    assert updated["phase"] == ReservationPhase.READY.value
    assert updated["slot_id"] == "slot001"
    assert updated["instance_id"] == "i-test1"

    slot = db.get_slot("slot001")
    assert slot["lease_count"] == 1


def test_two_pending_one_slot_only_one_assigned_per_tick():
    db, runtime, config, clock, metrics = _make_env(max_leases=1)
    _make_slot_ready(db, "slot001")

    r1 = db.create_reservation("x86_64-linux", "test1", None, 1200)
    r2 = db.create_reservation("x86_64-linux", "test2", None, 1200)

    scheduling_tick(db, runtime, config, clock, metrics)

    u1 = db.get_reservation(r1["reservation_id"])
    u2 = db.get_reservation(r2["reservation_id"])

    ready_count = sum(1 for r in [u1, u2] if r["phase"] == ReservationPhase.READY.value)
    pending_count = sum(1 for r in [u1, u2] if r["phase"] == ReservationPhase.PENDING.value)
    assert ready_count == 1
    assert pending_count == 1

    slot = db.get_slot("slot001")
    assert slot["lease_count"] == 1


def test_reservation_expires_when_ttl_passes():
    db, runtime, config, clock, metrics = _make_env()
    config.capacity.reservation_ttl_seconds = 60

    db.create_reservation("x86_64-linux", "test", None, 60)

    clock.advance(61)
    scheduling_tick(db, runtime, config, clock, metrics)

    reservations = db.list_reservations(ReservationPhase.EXPIRED)
    assert len(reservations) == 1


def test_scale_down_starts_when_idle_exceeds_threshold():
    db, runtime, config, clock, metrics = _make_env(idle_scale_down_seconds=900)
    _make_slot_ready(db, "slot001")

    clock.advance(901)
    scheduling_tick(db, runtime, config, clock, metrics)

    slot = db.get_slot("slot001")
    assert slot["state"] == SlotState.DRAINING.value


def test_slot_does_not_drain_while_lease_count_positive():
    db, runtime, config, clock, metrics = _make_env(idle_scale_down_seconds=900)
    _make_slot_ready(db, "slot001")

    resv = db.create_reservation("x86_64-linux", "test", None, 1200)
    scheduling_tick(db, runtime, config, clock, metrics)

    # Confirm assigned
    updated = db.get_reservation(resv["reservation_id"])
    assert updated["phase"] == ReservationPhase.READY.value

    clock.advance(901)
    scheduling_tick(db, runtime, config, clock, metrics)

    slot = db.get_slot("slot001")
    assert slot["state"] == SlotState.READY.value


def test_interruption_pending_slot_moves_to_draining():
    db, runtime, config, clock, metrics = _make_env()
    _make_slot_ready(db, "slot001")

    db.update_slot_fields("slot001", interruption_pending=1)

    scheduling_tick(db, runtime, config, clock, metrics)

    slot = db.get_slot("slot001")
    assert slot["state"] == SlotState.DRAINING.value
    assert slot["interruption_pending"] == 0


def test_launch_triggered_for_unmet_demand():
    db, runtime, config, clock, metrics = _make_env()

    db.create_reservation("x86_64-linux", "test", None, 1200)

    scheduling_tick(db, runtime, config, clock, metrics)

    launching = db.list_slots(SlotState.LAUNCHING)
    assert len(launching) == 1
    assert launching[0]["instance_id"] is not None

    # FakeRuntime should have one pending instance
    managed = runtime.list_managed_instances()
    assert len(managed) == 1


def test_launch_respects_max_slots():
    db, runtime, config, clock, metrics = _make_env(max_slots=1)
    _make_slot_ready(db, "slot001")

    # Slot001 is at capacity (lease_count will be 1 after assignment)
    db.create_reservation("x86_64-linux", "test1", None, 1200)
    db.create_reservation("x86_64-linux", "test2", None, 1200)

    scheduling_tick(db, runtime, config, clock, metrics)

    # One reservation assigned, one still pending — but no new launch
    # because active_slots (1) == max_slots (1)
    launching = db.list_slots(SlotState.LAUNCHING)
    assert len(launching) == 0


def test_min_slots_maintained():
    db, runtime, config, clock, metrics = _make_env(min_slots=1)

    # No reservations, all slots empty
    scheduling_tick(db, runtime, config, clock, metrics)

    launching = db.list_slots(SlotState.LAUNCHING)
    assert len(launching) == 1


def test_scale_down_respects_min_slots():
    db, runtime, config, clock, metrics = _make_env(min_slots=1, idle_scale_down_seconds=900)
    _make_slot_ready(db, "slot001")

    clock.advance(901)
    scheduling_tick(db, runtime, config, clock, metrics)

    slot = db.get_slot("slot001")
    assert slot["state"] == SlotState.READY.value
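These scheduler tests drive time with `clock.advance(...)` from `nix_builder_autoscaler/providers/clock.py`, a module not included in this diff. A minimal sketch of the interface the tests appear to rely on — the class body below is an assumption for illustration, not the project's code:

```python
# Illustrative sketch; the real FakeClock lives in
# nix_builder_autoscaler/providers/clock.py (not shown in this diff).
class MiniFakeClock:
    """Deterministic clock: now() returns a fake timestamp in seconds
    that only moves when advance() is called, so tests can step past
    TTLs and idle thresholds (e.g. advance(901) > 900s) exactly."""

    def __init__(self, start=0.0):
        self._now = float(start)

    def now(self):
        # Current fake time in seconds since the clock's start.
        return self._now

    def advance(self, seconds):
        # Move time forward; never backwards, mirroring a monotonic clock.
        self._now += seconds
```

Injecting such a clock into `StateDB` and the scheduler is what lets `test_reservation_expires_when_ttl_passes` cross a 60-second TTL instantly instead of sleeping.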
46  agent/pyproject.toml  Normal file

@@ -0,0 +1,46 @@
[build-system]
requires = ["setuptools>=68", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name = "nix-builder-autoscaler"
version = "0.1.0"
description = "Autoscaler daemon for Nix remote builders on EC2 Spot instances"
requires-python = ">=3.12"
dependencies = [
    "boto3",
    "fastapi",
    "uvicorn[standard]",
]

[project.scripts]
autoscalerctl = "nix_builder_autoscaler.cli:main"

[dependency-groups]
dev = [
    "ruff",
    "pyright",
    "pytest",
    "httpx",
    "botocore",
]

[tool.ruff]
target-version = "py312"
line-length = 100

[tool.ruff.lint]
select = ["E", "F", "I", "UP", "B", "SIM", "ANN"]
ignore = ["ANN401"]

[tool.ruff.lint.per-file-ignores]
"*/tests/*" = ["ANN"]

[tool.pyright]
pythonVersion = "3.12"
typeCheckingMode = "standard"
include = ["nix_builder_autoscaler"]
exclude = ["**/tests"]

[tool.pytest.ini_options]
testpaths = ["nix_builder_autoscaler/tests"]
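The `[project.scripts]` table above wires an `autoscalerctl` console script to `nix_builder_autoscaler.cli:main`, but the `cli` module itself is not part of this diff. A hypothetical sketch of the shape such an entry point takes — the subcommand names below are invented for illustration only:

```python
# Hypothetical entry-point sketch for the autoscalerctl console script
# declared in [project.scripts]; the real nix_builder_autoscaler.cli
# module is not shown in this diff, so everything here is assumed.
import argparse


def main(argv=None):
    parser = argparse.ArgumentParser(prog="autoscalerctl")
    parser.add_argument(
        "command",
        choices=["status", "drain"],  # placeholder subcommands
        help="hypothetical subcommands, for illustration only",
    )
    args = parser.parse_args(argv)
    # A real CLI would dispatch on args.command here.
    return 0 if args.command else 1
```

Setuptools turns this into an executable by calling `main()` with no arguments, so returning an int exit code (rather than calling `sys.exit` inside) keeps the function testable.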
714  agent/uv.lock  generated  Normal file

@@ -0,0 +1,714 @@
version = 1
|
||||
revision = 3
|
||||
requires-python = ">=3.12"
|
||||
|
||||
[[package]]
|
||||
name = "annotated-doc"
|
||||
version = "0.0.4"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/57/ba/046ceea27344560984e26a590f90bc7f4a75b06701f653222458922b558c/annotated_doc-0.0.4.tar.gz", hash = "sha256:fbcda96e87e9c92ad167c2e53839e57503ecfda18804ea28102353485033faa4", size = 7288, upload-time = "2025-11-10T22:07:42.062Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/1e/d3/26bf1008eb3d2daa8ef4cacc7f3bfdc11818d111f7e2d0201bc6e3b49d45/annotated_doc-0.0.4-py3-none-any.whl", hash = "sha256:571ac1dc6991c450b25a9c2d84a3705e2ae7a53467b5d111c24fa8baabbed320", size = 5303, upload-time = "2025-11-10T22:07:40.673Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "annotated-types"
|
||||
version = "0.7.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/ee/67/531ea369ba64dcff5ec9c3402f9f51bf748cec26dde048a2f973a4eea7f5/annotated_types-0.7.0.tar.gz", hash = "sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89", size = 16081, upload-time = "2024-05-20T21:33:25.928Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/78/b6/6307fbef88d9b5ee7421e68d78a9f162e0da4900bc5f5793f6d3d0e34fb8/annotated_types-0.7.0-py3-none-any.whl", hash = "sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53", size = 13643, upload-time = "2024-05-20T21:33:24.1Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "anyio"
|
||||
version = "4.12.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "idna" },
|
||||
{ name = "typing-extensions", marker = "python_full_version < '3.13'" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/96/f0/5eb65b2bb0d09ac6776f2eb54adee6abe8228ea05b20a5ad0e4945de8aac/anyio-4.12.1.tar.gz", hash = "sha256:41cfcc3a4c85d3f05c932da7c26d0201ac36f72abd4435ba90d0464a3ffed703", size = 228685, upload-time = "2026-01-06T11:45:21.246Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/38/0e/27be9fdef66e72d64c0cdc3cc2823101b80585f8119b5c112c2e8f5f7dab/anyio-4.12.1-py3-none-any.whl", hash = "sha256:d405828884fc140aa80a3c667b8beed277f1dfedec42ba031bd6ac3db606ab6c", size = 113592, upload-time = "2026-01-06T11:45:19.497Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "boto3"
|
||||
version = "1.42.58"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "botocore" },
|
||||
{ name = "jmespath" },
|
||||
{ name = "s3transfer" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/b9/35/02f91308eed91fb8351809e8319c204dce7672e8bb297395ed44395b7b97/boto3-1.42.58.tar.gz", hash = "sha256:3a21b5bbc8bf8d6472a7ae7bdc77819b1f86f35d127f428f4603bed1b98122c0", size = 112775, upload-time = "2026-02-26T20:25:21.535Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/13/47/3a5b53628311fef4a2cec5c04ff750376ecaac0e9eb7fbea1fa8a88ec198/boto3-1.42.58-py3-none-any.whl", hash = "sha256:1bc5ff0b7a1a3f42b115481e269e1aada1d68bbfa80a989ac2882d51072907a3", size = 140556, upload-time = "2026-02-26T20:25:18.543Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "botocore"
|
||||
version = "1.42.58"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "jmespath" },
|
||||
{ name = "python-dateutil" },
|
||||
{ name = "urllib3" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/23/f4/9466eee955c62af0430c0c608a50d460d017fb4609b29eba84c6473d04c6/botocore-1.42.58.tar.gz", hash = "sha256:55224d6a91afae0997e8bee62d1ef1ae2dcbc6c210516939b32a774b0b35bec5", size = 14942809, upload-time = "2026-02-26T20:25:07.805Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/4e/e0/f957ed6434f922ceffddba6db308b23d1ec2206beacb166cb83a75c5af61/botocore-1.42.58-py3-none-any.whl", hash = "sha256:3098178f4404cf85c8997ebb7948b3f267cff1dd191b08fc4ebb614ac1013a20", size = 14616050, upload-time = "2026-02-26T20:25:02.609Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "certifi"
|
||||
version = "2026.2.25"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/af/2d/7bf41579a8986e348fa033a31cdd0e4121114f6bce2457e8876010b092dd/certifi-2026.2.25.tar.gz", hash = "sha256:e887ab5cee78ea814d3472169153c2d12cd43b14bd03329a39a9c6e2e80bfba7", size = 155029, upload-time = "2026-02-25T02:54:17.342Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/9a/3c/c17fb3ca2d9c3acff52e30b309f538586f9f5b9c9cf454f3845fc9af4881/certifi-2026.2.25-py3-none-any.whl", hash = "sha256:027692e4402ad994f1c42e52a4997a9763c646b73e4096e4d5d6db8af1d6f0fa", size = 153684, upload-time = "2026-02-25T02:54:15.766Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "click"
|
||||
version = "8.3.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "colorama", marker = "sys_platform == 'win32'" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/3d/fa/656b739db8587d7b5dfa22e22ed02566950fbfbcdc20311993483657a5c0/click-8.3.1.tar.gz", hash = "sha256:12ff4785d337a1bb490bb7e9c2b1ee5da3112e94a8622f26a6c77f5d2fc6842a", size = 295065, upload-time = "2025-11-15T20:45:42.706Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/98/78/01c019cdb5d6498122777c1a43056ebb3ebfeef2076d9d026bfe15583b2b/click-8.3.1-py3-none-any.whl", hash = "sha256:981153a64e25f12d547d3426c367a4857371575ee7ad18df2a6183ab0545b2a6", size = 108274, upload-time = "2025-11-15T20:45:41.139Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "colorama"
|
||||
version = "0.4.6"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/d8/53/6f443c9a4a8358a93a6792e2acffb9d9d5cb0a5cfd8802644b7b1c9a02e4/colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44", size = 27697, upload-time = "2022-10-25T02:36:22.414Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335, upload-time = "2022-10-25T02:36:20.889Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "fastapi"
|
||||
version = "0.133.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "annotated-doc" },
|
||||
{ name = "pydantic" },
|
||||
{ name = "starlette" },
|
||||
{ name = "typing-extensions" },
|
||||
{ name = "typing-inspection" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/22/6f/0eafed8349eea1fa462238b54a624c8b408cd1ba2795c8e64aa6c34f8ab7/fastapi-0.133.1.tar.gz", hash = "sha256:ed152a45912f102592976fde6cbce7dae1a8a1053da94202e51dd35d184fadd6", size = 378741, upload-time = "2026-02-25T18:18:17.398Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/d2/c9/a175a7779f3599dfa4adfc97a6ce0e157237b3d7941538604aadaf97bfb6/fastapi-0.133.1-py3-none-any.whl", hash = "sha256:658f34ba334605b1617a65adf2ea6461901bdb9af3a3080d63ff791ecf7dc2e2", size = 109029, upload-time = "2026-02-25T18:18:18.578Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "h11"
|
||||
version = "0.16.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/01/ee/02a2c011bdab74c6fb3c75474d40b3052059d95df7e73351460c8588d963/h11-0.16.0.tar.gz", hash = "sha256:4e35b956cf45792e4caa5885e69fba00bdbc6ffafbfa020300e549b208ee5ff1", size = 101250, upload-time = "2025-04-24T03:35:25.427Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/04/4b/29cac41a4d98d144bf5f6d33995617b185d14b22401f75ca86f384e87ff1/h11-0.16.0-py3-none-any.whl", hash = "sha256:63cf8bbe7522de3bf65932fda1d9c2772064ffb3dae62d55932da54b31cb6c86", size = 37515, upload-time = "2025-04-24T03:35:24.344Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "httpcore"
|
||||
version = "1.0.9"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "certifi" },
|
||||
{ name = "h11" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/06/94/82699a10bca87a5556c9c59b5963f2d039dbd239f25bc2a63907a05a14cb/httpcore-1.0.9.tar.gz", hash = "sha256:6e34463af53fd2ab5d807f399a9b45ea31c3dfa2276f15a2c3f00afff6e176e8", size = 85484, upload-time = "2025-04-24T22:06:22.219Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/7e/f5/f66802a942d491edb555dd61e3a9961140fd64c90bce1eafd741609d334d/httpcore-1.0.9-py3-none-any.whl", hash = "sha256:2d400746a40668fc9dec9810239072b40b4484b640a8c38fd654a024c7a1bf55", size = 78784, upload-time = "2025-04-24T22:06:20.566Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "httptools"
|
||||
version = "0.7.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/b5/46/120a669232c7bdedb9d52d4aeae7e6c7dfe151e99dc70802e2fc7a5e1993/httptools-0.7.1.tar.gz", hash = "sha256:abd72556974f8e7c74a259655924a717a2365b236c882c3f6f8a45fe94703ac9", size = 258961, upload-time = "2025-10-10T03:55:08.559Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/53/7f/403e5d787dc4942316e515e949b0c8a013d84078a915910e9f391ba9b3ed/httptools-0.7.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:38e0c83a2ea9746ebbd643bdfb521b9aa4a91703e2cd705c20443405d2fd16a5", size = 206280, upload-time = "2025-10-10T03:54:39.274Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/2a/0d/7f3fd28e2ce311ccc998c388dd1c53b18120fda3b70ebb022b135dc9839b/httptools-0.7.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:f25bbaf1235e27704f1a7b86cd3304eabc04f569c828101d94a0e605ef7205a5", size = 110004, upload-time = "2025-10-10T03:54:40.403Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/84/a6/b3965e1e146ef5762870bbe76117876ceba51a201e18cc31f5703e454596/httptools-0.7.1-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:2c15f37ef679ab9ecc06bfc4e6e8628c32a8e4b305459de7cf6785acd57e4d03", size = 517655, upload-time = "2025-10-10T03:54:41.347Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/11/7d/71fee6f1844e6fa378f2eddde6c3e41ce3a1fb4b2d81118dd544e3441ec0/httptools-0.7.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:7fe6e96090df46b36ccfaf746f03034e5ab723162bc51b0a4cf58305324036f2", size = 511440, upload-time = "2025-10-10T03:54:42.452Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/22/a5/079d216712a4f3ffa24af4a0381b108aa9c45b7a5cc6eb141f81726b1823/httptools-0.7.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:f72fdbae2dbc6e68b8239defb48e6a5937b12218e6ffc2c7846cc37befa84362", size = 495186, upload-time = "2025-10-10T03:54:43.937Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/e9/9e/025ad7b65278745dee3bd0ebf9314934c4592560878308a6121f7f812084/httptools-0.7.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:e99c7b90a29fd82fea9ef57943d501a16f3404d7b9ee81799d41639bdaae412c", size = 499192, upload-time = "2025-10-10T03:54:45.003Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/6d/de/40a8f202b987d43afc4d54689600ff03ce65680ede2f31df348d7f368b8f/httptools-0.7.1-cp312-cp312-win_amd64.whl", hash = "sha256:3e14f530fefa7499334a79b0cf7e7cd2992870eb893526fb097d51b4f2d0f321", size = 86694, upload-time = "2025-10-10T03:54:45.923Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/09/8f/c77b1fcbfd262d422f12da02feb0d218fa228d52485b77b953832105bb90/httptools-0.7.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:6babce6cfa2a99545c60bfef8bee0cc0545413cb0018f617c8059a30ad985de3", size = 202889, upload-time = "2025-10-10T03:54:47.089Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/0a/1a/22887f53602feaa066354867bc49a68fc295c2293433177ee90870a7d517/httptools-0.7.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:601b7628de7504077dd3dcb3791c6b8694bbd967148a6d1f01806509254fb1ca", size = 108180, upload-time = "2025-10-10T03:54:48.052Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/32/6a/6aaa91937f0010d288d3d124ca2946d48d60c3a5ee7ca62afe870e3ea011/httptools-0.7.1-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:04c6c0e6c5fb0739c5b8a9eb046d298650a0ff38cf42537fc372b28dc7e4472c", size = 478596, upload-time = "2025-10-10T03:54:48.919Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/6d/70/023d7ce117993107be88d2cbca566a7c1323ccbaf0af7eabf2064fe356f6/httptools-0.7.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:69d4f9705c405ae3ee83d6a12283dc9feba8cc6aaec671b412917e644ab4fa66", size = 473268, upload-time = "2025-10-10T03:54:49.993Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/32/4d/9dd616c38da088e3f436e9a616e1d0cc66544b8cdac405cc4e81c8679fc7/httptools-0.7.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:44c8f4347d4b31269c8a9205d8a5ee2df5322b09bbbd30f8f862185bb6b05346", size = 455517, upload-time = "2025-10-10T03:54:51.066Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/1d/3a/a6c595c310b7df958e739aae88724e24f9246a514d909547778d776799be/httptools-0.7.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:465275d76db4d554918aba40bf1cbebe324670f3dfc979eaffaa5d108e2ed650", size = 458337, upload-time = "2025-10-10T03:54:52.196Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/fd/82/88e8d6d2c51edc1cc391b6e044c6c435b6aebe97b1abc33db1b0b24cd582/httptools-0.7.1-cp313-cp313-win_amd64.whl", hash = "sha256:322d00c2068d125bd570f7bf78b2d367dad02b919d8581d7476d8b75b294e3e6", size = 85743, upload-time = "2025-10-10T03:54:53.448Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/34/50/9d095fcbb6de2d523e027a2f304d4551855c2f46e0b82befd718b8b20056/httptools-0.7.1-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:c08fe65728b8d70b6923ce31e3956f859d5e1e8548e6f22ec520a962c6757270", size = 203619, upload-time = "2025-10-10T03:54:54.321Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/07/f0/89720dc5139ae54b03f861b5e2c55a37dba9a5da7d51e1e824a1f343627f/httptools-0.7.1-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:7aea2e3c3953521c3c51106ee11487a910d45586e351202474d45472db7d72d3", size = 108714, upload-time = "2025-10-10T03:54:55.163Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/b3/cb/eea88506f191fb552c11787c23f9a405f4c7b0c5799bf73f2249cd4f5228/httptools-0.7.1-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:0e68b8582f4ea9166be62926077a3334064d422cf08ab87d8b74664f8e9058e1", size = 472909, upload-time = "2025-10-10T03:54:56.056Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/e0/4a/a548bdfae6369c0d078bab5769f7b66f17f1bfaa6fa28f81d6be6959066b/httptools-0.7.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:df091cf961a3be783d6aebae963cc9b71e00d57fa6f149025075217bc6a55a7b", size = 470831, upload-time = "2025-10-10T03:54:57.219Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/4d/31/14df99e1c43bd132eec921c2e7e11cda7852f65619bc0fc5bdc2d0cb126c/httptools-0.7.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:f084813239e1eb403ddacd06a30de3d3e09a9b76e7894dcda2b22f8a726e9c60", size = 452631, upload-time = "2025-10-10T03:54:58.219Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/22/d2/b7e131f7be8d854d48cb6d048113c30f9a46dca0c9a8b08fcb3fcd588cdc/httptools-0.7.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:7347714368fb2b335e9063bc2b96f2f87a9ceffcd9758ac295f8bbcd3ffbc0ca", size = 452910, upload-time = "2025-10-10T03:54:59.366Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/53/cf/878f3b91e4e6e011eff6d1fa9ca39f7eb17d19c9d7971b04873734112f30/httptools-0.7.1-cp314-cp314-win_amd64.whl", hash = "sha256:cfabda2a5bb85aa2a904ce06d974a3f30fb36cc63d7feaddec05d2050acede96", size = 88205, upload-time = "2025-10-10T03:55:00.389Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "httpx"
|
||||
version = "0.28.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "anyio" },
|
||||
{ name = "certifi" },
|
||||
{ name = "httpcore" },
|
||||
{ name = "idna" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/b1/df/48c586a5fe32a0f01324ee087459e112ebb7224f646c0b5023f5e79e9956/httpx-0.28.1.tar.gz", hash = "sha256:75e98c5f16b0f35b567856f597f06ff2270a374470a5c2392242528e3e3e42fc", size = 141406, upload-time = "2024-12-06T15:37:23.222Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/2a/39/e50c7c3a983047577ee07d2a9e53faf5a69493943ec3f6a384bdc792deb2/httpx-0.28.1-py3-none-any.whl", hash = "sha256:d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad", size = 73517, upload-time = "2024-12-06T15:37:21.509Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "idna"
|
||||
version = "3.11"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/6f/6d/0703ccc57f3a7233505399edb88de3cbd678da106337b9fcde432b65ed60/idna-3.11.tar.gz", hash = "sha256:795dafcc9c04ed0c1fb032c2aa73654d8e8c5023a7df64a53f39190ada629902", size = 194582, upload-time = "2025-10-12T14:55:20.501Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/0e/61/66938bbb5fc52dbdf84594873d5b51fb1f7c7794e9c0f5bd885f30bc507b/idna-3.11-py3-none-any.whl", hash = "sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea", size = 71008, upload-time = "2025-10-12T14:55:18.883Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "iniconfig"
|
||||
version = "2.3.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/72/34/14ca021ce8e5dfedc35312d08ba8bf51fdd999c576889fc2c24cb97f4f10/iniconfig-2.3.0.tar.gz", hash = "sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730", size = 20503, upload-time = "2025-10-18T21:55:43.219Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/cb/b1/3846dd7f199d53cb17f49cba7e651e9ce294d8497c8c150530ed11865bb8/iniconfig-2.3.0-py3-none-any.whl", hash = "sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12", size = 7484, upload-time = "2025-10-18T21:55:41.639Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "jmespath"
|
||||
version = "1.1.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/d3/59/322338183ecda247fb5d1763a6cbe46eff7222eaeebafd9fa65d4bf5cb11/jmespath-1.1.0.tar.gz", hash = "sha256:472c87d80f36026ae83c6ddd0f1d05d4e510134ed462851fd5f754c8c3cbb88d", size = 27377, upload-time = "2026-01-22T16:35:26.279Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/14/2f/967ba146e6d58cf6a652da73885f52fc68001525b4197effc174321d70b4/jmespath-1.1.0-py3-none-any.whl", hash = "sha256:a5663118de4908c91729bea0acadca56526eb2698e83de10cd116ae0f4e97c64", size = 20419, upload-time = "2026-01-22T16:35:24.919Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "nix-builder-autoscaler"
|
||||
version = "0.1.0"
|
||||
source = { editable = "." }
|
||||
dependencies = [
|
||||
{ name = "boto3" },
|
||||
{ name = "fastapi" },
|
||||
{ name = "uvicorn", extra = ["standard"] },
|
||||
]
|
||||
|
||||
[package.dev-dependencies]
|
||||
dev = [
|
||||
{ name = "botocore" },
|
||||
{ name = "httpx" },
|
||||
{ name = "pyright" },
|
||||
{ name = "pytest" },
|
||||
{ name = "ruff" },
|
||||
]
|
||||
|
||||
[package.metadata]
|
||||
requires-dist = [
|
||||
{ name = "boto3" },
|
||||
{ name = "fastapi" },
|
||||
{ name = "uvicorn", extras = ["standard"] },
|
||||
]
|
||||
|
||||
[package.metadata.requires-dev]
|
||||
dev = [
|
||||
{ name = "botocore" },
|
||||
{ name = "httpx" },
|
||||
{ name = "pyright" },
|
||||
{ name = "pytest" },
|
||||
{ name = "ruff" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "nodeenv"
|
||||
version = "1.10.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/24/bf/d1bda4f6168e0b2e9e5958945e01910052158313224ada5ce1fb2e1113b8/nodeenv-1.10.0.tar.gz", hash = "sha256:996c191ad80897d076bdfba80a41994c2b47c68e224c542b48feba42ba00f8bb", size = 55611, upload-time = "2025-12-20T14:08:54.006Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/88/b2/d0896bdcdc8d28a7fc5717c305f1a861c26e18c05047949fb371034d98bd/nodeenv-1.10.0-py2.py3-none-any.whl", hash = "sha256:5bb13e3eed2923615535339b3c620e76779af4cb4c6a90deccc9e36b274d3827", size = 23438, upload-time = "2025-12-20T14:08:52.782Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "packaging"
|
||||
version = "26.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/65/ee/299d360cdc32edc7d2cf530f3accf79c4fca01e96ffc950d8a52213bd8e4/packaging-26.0.tar.gz", hash = "sha256:00243ae351a257117b6a241061796684b084ed1c516a08c48a3f7e147a9d80b4", size = 143416, upload-time = "2026-01-21T20:50:39.064Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/b7/b9/c538f279a4e237a006a2c98387d081e9eb060d203d8ed34467cc0f0b9b53/packaging-26.0-py3-none-any.whl", hash = "sha256:b36f1fef9334a5588b4166f8bcd26a14e521f2b55e6b9de3aaa80d3ff7a37529", size = 74366, upload-time = "2026-01-21T20:50:37.788Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "pluggy"
|
||||
version = "1.6.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/f9/e2/3e91f31a7d2b083fe6ef3fa267035b518369d9511ffab804f839851d2779/pluggy-1.6.0.tar.gz", hash = "sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3", size = 69412, upload-time = "2025-05-15T12:30:07.975Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538, upload-time = "2025-05-15T12:30:06.134Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "pydantic"
|
||||
version = "2.12.5"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "annotated-types" },
|
||||
{ name = "pydantic-core" },
|
||||
{ name = "typing-extensions" },
|
||||
{ name = "typing-inspection" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/69/44/36f1a6e523abc58ae5f928898e4aca2e0ea509b5aa6f6f392a5d882be928/pydantic-2.12.5.tar.gz", hash = "sha256:4d351024c75c0f085a9febbb665ce8c0c6ec5d30e903bdb6394b7ede26aebb49", size = 821591, upload-time = "2025-11-26T15:11:46.471Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/5a/87/b70ad306ebb6f9b585f114d0ac2137d792b48be34d732d60e597c2f8465a/pydantic-2.12.5-py3-none-any.whl", hash = "sha256:e561593fccf61e8a20fc46dfc2dfe075b8be7d0188df33f221ad1f0139180f9d", size = 463580, upload-time = "2025-11-26T15:11:44.605Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "pydantic-core"
|
||||
version = "2.41.5"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "typing-extensions" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/71/70/23b021c950c2addd24ec408e9ab05d59b035b39d97cdc1130e1bce647bb6/pydantic_core-2.41.5.tar.gz", hash = "sha256:08daa51ea16ad373ffd5e7606252cc32f07bc72b28284b6bc9c6df804816476e", size = 460952, upload-time = "2025-11-04T13:43:49.098Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/5f/5d/5f6c63eebb5afee93bcaae4ce9a898f3373ca23df3ccaef086d0233a35a7/pydantic_core-2.41.5-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:f41a7489d32336dbf2199c8c0a215390a751c5b014c2c1c5366e817202e9cdf7", size = 2110990, upload-time = "2025-11-04T13:39:58.079Z" },
{ url = "https://files.pythonhosted.org/packages/aa/32/9c2e8ccb57c01111e0fd091f236c7b371c1bccea0fa85247ac55b1e2b6b6/pydantic_core-2.41.5-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:070259a8818988b9a84a449a2a7337c7f430a22acc0859c6b110aa7212a6d9c0", size = 1896003, upload-time = "2025-11-04T13:39:59.956Z" },
{ url = "https://files.pythonhosted.org/packages/68/b8/a01b53cb0e59139fbc9e4fda3e9724ede8de279097179be4ff31f1abb65a/pydantic_core-2.41.5-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e96cea19e34778f8d59fe40775a7a574d95816eb150850a85a7a4c8f4b94ac69", size = 1919200, upload-time = "2025-11-04T13:40:02.241Z" },
{ url = "https://files.pythonhosted.org/packages/38/de/8c36b5198a29bdaade07b5985e80a233a5ac27137846f3bc2d3b40a47360/pydantic_core-2.41.5-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ed2e99c456e3fadd05c991f8f437ef902e00eedf34320ba2b0842bd1c3ca3a75", size = 2052578, upload-time = "2025-11-04T13:40:04.401Z" },
{ url = "https://files.pythonhosted.org/packages/00/b5/0e8e4b5b081eac6cb3dbb7e60a65907549a1ce035a724368c330112adfdd/pydantic_core-2.41.5-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:65840751b72fbfd82c3c640cff9284545342a4f1eb1586ad0636955b261b0b05", size = 2208504, upload-time = "2025-11-04T13:40:06.072Z" },
{ url = "https://files.pythonhosted.org/packages/77/56/87a61aad59c7c5b9dc8caad5a41a5545cba3810c3e828708b3d7404f6cef/pydantic_core-2.41.5-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e536c98a7626a98feb2d3eaf75944ef6f3dbee447e1f841eae16f2f0a72d8ddc", size = 2335816, upload-time = "2025-11-04T13:40:07.835Z" },
{ url = "https://files.pythonhosted.org/packages/0d/76/941cc9f73529988688a665a5c0ecff1112b3d95ab48f81db5f7606f522d3/pydantic_core-2.41.5-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eceb81a8d74f9267ef4081e246ffd6d129da5d87e37a77c9bde550cb04870c1c", size = 2075366, upload-time = "2025-11-04T13:40:09.804Z" },
{ url = "https://files.pythonhosted.org/packages/d3/43/ebef01f69baa07a482844faaa0a591bad1ef129253ffd0cdaa9d8a7f72d3/pydantic_core-2.41.5-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d38548150c39b74aeeb0ce8ee1d8e82696f4a4e16ddc6de7b1d8823f7de4b9b5", size = 2171698, upload-time = "2025-11-04T13:40:12.004Z" },
{ url = "https://files.pythonhosted.org/packages/b1/87/41f3202e4193e3bacfc2c065fab7706ebe81af46a83d3e27605029c1f5a6/pydantic_core-2.41.5-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:c23e27686783f60290e36827f9c626e63154b82b116d7fe9adba1fda36da706c", size = 2132603, upload-time = "2025-11-04T13:40:13.868Z" },
{ url = "https://files.pythonhosted.org/packages/49/7d/4c00df99cb12070b6bccdef4a195255e6020a550d572768d92cc54dba91a/pydantic_core-2.41.5-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:482c982f814460eabe1d3bb0adfdc583387bd4691ef00b90575ca0d2b6fe2294", size = 2329591, upload-time = "2025-11-04T13:40:15.672Z" },
{ url = "https://files.pythonhosted.org/packages/cc/6a/ebf4b1d65d458f3cda6a7335d141305dfa19bdc61140a884d165a8a1bbc7/pydantic_core-2.41.5-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:bfea2a5f0b4d8d43adf9d7b8bf019fb46fdd10a2e5cde477fbcb9d1fa08c68e1", size = 2319068, upload-time = "2025-11-04T13:40:17.532Z" },
{ url = "https://files.pythonhosted.org/packages/49/3b/774f2b5cd4192d5ab75870ce4381fd89cf218af999515baf07e7206753f0/pydantic_core-2.41.5-cp312-cp312-win32.whl", hash = "sha256:b74557b16e390ec12dca509bce9264c3bbd128f8a2c376eaa68003d7f327276d", size = 1985908, upload-time = "2025-11-04T13:40:19.309Z" },
{ url = "https://files.pythonhosted.org/packages/86/45/00173a033c801cacf67c190fef088789394feaf88a98a7035b0e40d53dc9/pydantic_core-2.41.5-cp312-cp312-win_amd64.whl", hash = "sha256:1962293292865bca8e54702b08a4f26da73adc83dd1fcf26fbc875b35d81c815", size = 2020145, upload-time = "2025-11-04T13:40:21.548Z" },
{ url = "https://files.pythonhosted.org/packages/f9/22/91fbc821fa6d261b376a3f73809f907cec5ca6025642c463d3488aad22fb/pydantic_core-2.41.5-cp312-cp312-win_arm64.whl", hash = "sha256:1746d4a3d9a794cacae06a5eaaccb4b8643a131d45fbc9af23e353dc0a5ba5c3", size = 1976179, upload-time = "2025-11-04T13:40:23.393Z" },
{ url = "https://files.pythonhosted.org/packages/87/06/8806241ff1f70d9939f9af039c6c35f2360cf16e93c2ca76f184e76b1564/pydantic_core-2.41.5-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:941103c9be18ac8daf7b7adca8228f8ed6bb7a1849020f643b3a14d15b1924d9", size = 2120403, upload-time = "2025-11-04T13:40:25.248Z" },
{ url = "https://files.pythonhosted.org/packages/94/02/abfa0e0bda67faa65fef1c84971c7e45928e108fe24333c81f3bfe35d5f5/pydantic_core-2.41.5-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:112e305c3314f40c93998e567879e887a3160bb8689ef3d2c04b6cc62c33ac34", size = 1896206, upload-time = "2025-11-04T13:40:27.099Z" },
{ url = "https://files.pythonhosted.org/packages/15/df/a4c740c0943e93e6500f9eb23f4ca7ec9bf71b19e608ae5b579678c8d02f/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0cbaad15cb0c90aa221d43c00e77bb33c93e8d36e0bf74760cd00e732d10a6a0", size = 1919307, upload-time = "2025-11-04T13:40:29.806Z" },
{ url = "https://files.pythonhosted.org/packages/9a/e3/6324802931ae1d123528988e0e86587c2072ac2e5394b4bc2bc34b61ff6e/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:03ca43e12fab6023fc79d28ca6b39b05f794ad08ec2feccc59a339b02f2b3d33", size = 2063258, upload-time = "2025-11-04T13:40:33.544Z" },
{ url = "https://files.pythonhosted.org/packages/c9/d4/2230d7151d4957dd79c3044ea26346c148c98fbf0ee6ebd41056f2d62ab5/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:dc799088c08fa04e43144b164feb0c13f9a0bc40503f8df3e9fde58a3c0c101e", size = 2214917, upload-time = "2025-11-04T13:40:35.479Z" },
{ url = "https://files.pythonhosted.org/packages/e6/9f/eaac5df17a3672fef0081b6c1bb0b82b33ee89aa5cec0d7b05f52fd4a1fa/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:97aeba56665b4c3235a0e52b2c2f5ae9cd071b8a8310ad27bddb3f7fb30e9aa2", size = 2332186, upload-time = "2025-11-04T13:40:37.436Z" },
{ url = "https://files.pythonhosted.org/packages/cf/4e/35a80cae583a37cf15604b44240e45c05e04e86f9cfd766623149297e971/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:406bf18d345822d6c21366031003612b9c77b3e29ffdb0f612367352aab7d586", size = 2073164, upload-time = "2025-11-04T13:40:40.289Z" },
{ url = "https://files.pythonhosted.org/packages/bf/e3/f6e262673c6140dd3305d144d032f7bd5f7497d3871c1428521f19f9efa2/pydantic_core-2.41.5-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b93590ae81f7010dbe380cdeab6f515902ebcbefe0b9327cc4804d74e93ae69d", size = 2179146, upload-time = "2025-11-04T13:40:42.809Z" },
{ url = "https://files.pythonhosted.org/packages/75/c7/20bd7fc05f0c6ea2056a4565c6f36f8968c0924f19b7d97bbfea55780e73/pydantic_core-2.41.5-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:01a3d0ab748ee531f4ea6c3e48ad9dac84ddba4b0d82291f87248f2f9de8d740", size = 2137788, upload-time = "2025-11-04T13:40:44.752Z" },
{ url = "https://files.pythonhosted.org/packages/3a/8d/34318ef985c45196e004bc46c6eab2eda437e744c124ef0dbe1ff2c9d06b/pydantic_core-2.41.5-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:6561e94ba9dacc9c61bce40e2d6bdc3bfaa0259d3ff36ace3b1e6901936d2e3e", size = 2340133, upload-time = "2025-11-04T13:40:46.66Z" },
{ url = "https://files.pythonhosted.org/packages/9c/59/013626bf8c78a5a5d9350d12e7697d3d4de951a75565496abd40ccd46bee/pydantic_core-2.41.5-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:915c3d10f81bec3a74fbd4faebe8391013ba61e5a1a8d48c4455b923bdda7858", size = 2324852, upload-time = "2025-11-04T13:40:48.575Z" },
{ url = "https://files.pythonhosted.org/packages/1a/d9/c248c103856f807ef70c18a4f986693a46a8ffe1602e5d361485da502d20/pydantic_core-2.41.5-cp313-cp313-win32.whl", hash = "sha256:650ae77860b45cfa6e2cdafc42618ceafab3a2d9a3811fcfbd3bbf8ac3c40d36", size = 1994679, upload-time = "2025-11-04T13:40:50.619Z" },
{ url = "https://files.pythonhosted.org/packages/9e/8b/341991b158ddab181cff136acd2552c9f35bd30380422a639c0671e99a91/pydantic_core-2.41.5-cp313-cp313-win_amd64.whl", hash = "sha256:79ec52ec461e99e13791ec6508c722742ad745571f234ea6255bed38c6480f11", size = 2019766, upload-time = "2025-11-04T13:40:52.631Z" },
{ url = "https://files.pythonhosted.org/packages/73/7d/f2f9db34af103bea3e09735bb40b021788a5e834c81eedb541991badf8f5/pydantic_core-2.41.5-cp313-cp313-win_arm64.whl", hash = "sha256:3f84d5c1b4ab906093bdc1ff10484838aca54ef08de4afa9de0f5f14d69639cd", size = 1981005, upload-time = "2025-11-04T13:40:54.734Z" },
{ url = "https://files.pythonhosted.org/packages/ea/28/46b7c5c9635ae96ea0fbb779e271a38129df2550f763937659ee6c5dbc65/pydantic_core-2.41.5-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:3f37a19d7ebcdd20b96485056ba9e8b304e27d9904d233d7b1015db320e51f0a", size = 2119622, upload-time = "2025-11-04T13:40:56.68Z" },
{ url = "https://files.pythonhosted.org/packages/74/1a/145646e5687e8d9a1e8d09acb278c8535ebe9e972e1f162ed338a622f193/pydantic_core-2.41.5-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:1d1d9764366c73f996edd17abb6d9d7649a7eb690006ab6adbda117717099b14", size = 1891725, upload-time = "2025-11-04T13:40:58.807Z" },
{ url = "https://files.pythonhosted.org/packages/23/04/e89c29e267b8060b40dca97bfc64a19b2a3cf99018167ea1677d96368273/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:25e1c2af0fce638d5f1988b686f3b3ea8cd7de5f244ca147c777769e798a9cd1", size = 1915040, upload-time = "2025-11-04T13:41:00.853Z" },
{ url = "https://files.pythonhosted.org/packages/84/a3/15a82ac7bd97992a82257f777b3583d3e84bdb06ba6858f745daa2ec8a85/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:506d766a8727beef16b7adaeb8ee6217c64fc813646b424d0804d67c16eddb66", size = 2063691, upload-time = "2025-11-04T13:41:03.504Z" },
{ url = "https://files.pythonhosted.org/packages/74/9b/0046701313c6ef08c0c1cf0e028c67c770a4e1275ca73131563c5f2a310a/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4819fa52133c9aa3c387b3328f25c1facc356491e6135b459f1de698ff64d869", size = 2213897, upload-time = "2025-11-04T13:41:05.804Z" },
{ url = "https://files.pythonhosted.org/packages/8a/cd/6bac76ecd1b27e75a95ca3a9a559c643b3afcd2dd62086d4b7a32a18b169/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2b761d210c9ea91feda40d25b4efe82a1707da2ef62901466a42492c028553a2", size = 2333302, upload-time = "2025-11-04T13:41:07.809Z" },
{ url = "https://files.pythonhosted.org/packages/4c/d2/ef2074dc020dd6e109611a8be4449b98cd25e1b9b8a303c2f0fca2f2bcf7/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:22f0fb8c1c583a3b6f24df2470833b40207e907b90c928cc8d3594b76f874375", size = 2064877, upload-time = "2025-11-04T13:41:09.827Z" },
{ url = "https://files.pythonhosted.org/packages/18/66/e9db17a9a763d72f03de903883c057b2592c09509ccfe468187f2a2eef29/pydantic_core-2.41.5-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2782c870e99878c634505236d81e5443092fba820f0373997ff75f90f68cd553", size = 2180680, upload-time = "2025-11-04T13:41:12.379Z" },
{ url = "https://files.pythonhosted.org/packages/d3/9e/3ce66cebb929f3ced22be85d4c2399b8e85b622db77dad36b73c5387f8f8/pydantic_core-2.41.5-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:0177272f88ab8312479336e1d777f6b124537d47f2123f89cb37e0accea97f90", size = 2138960, upload-time = "2025-11-04T13:41:14.627Z" },
{ url = "https://files.pythonhosted.org/packages/a6/62/205a998f4327d2079326b01abee48e502ea739d174f0a89295c481a2272e/pydantic_core-2.41.5-cp314-cp314-musllinux_1_1_armv7l.whl", hash = "sha256:63510af5e38f8955b8ee5687740d6ebf7c2a0886d15a6d65c32814613681bc07", size = 2339102, upload-time = "2025-11-04T13:41:16.868Z" },
{ url = "https://files.pythonhosted.org/packages/3c/0d/f05e79471e889d74d3d88f5bd20d0ed189ad94c2423d81ff8d0000aab4ff/pydantic_core-2.41.5-cp314-cp314-musllinux_1_1_x86_64.whl", hash = "sha256:e56ba91f47764cc14f1daacd723e3e82d1a89d783f0f5afe9c364b8bb491ccdb", size = 2326039, upload-time = "2025-11-04T13:41:18.934Z" },
{ url = "https://files.pythonhosted.org/packages/ec/e1/e08a6208bb100da7e0c4b288eed624a703f4d129bde2da475721a80cab32/pydantic_core-2.41.5-cp314-cp314-win32.whl", hash = "sha256:aec5cf2fd867b4ff45b9959f8b20ea3993fc93e63c7363fe6851424c8a7e7c23", size = 1995126, upload-time = "2025-11-04T13:41:21.418Z" },
{ url = "https://files.pythonhosted.org/packages/48/5d/56ba7b24e9557f99c9237e29f5c09913c81eeb2f3217e40e922353668092/pydantic_core-2.41.5-cp314-cp314-win_amd64.whl", hash = "sha256:8e7c86f27c585ef37c35e56a96363ab8de4e549a95512445b85c96d3e2f7c1bf", size = 2015489, upload-time = "2025-11-04T13:41:24.076Z" },
{ url = "https://files.pythonhosted.org/packages/4e/bb/f7a190991ec9e3e0ba22e4993d8755bbc4a32925c0b5b42775c03e8148f9/pydantic_core-2.41.5-cp314-cp314-win_arm64.whl", hash = "sha256:e672ba74fbc2dc8eea59fb6d4aed6845e6905fc2a8afe93175d94a83ba2a01a0", size = 1977288, upload-time = "2025-11-04T13:41:26.33Z" },
{ url = "https://files.pythonhosted.org/packages/92/ed/77542d0c51538e32e15afe7899d79efce4b81eee631d99850edc2f5e9349/pydantic_core-2.41.5-cp314-cp314t-macosx_10_12_x86_64.whl", hash = "sha256:8566def80554c3faa0e65ac30ab0932b9e3a5cd7f8323764303d468e5c37595a", size = 2120255, upload-time = "2025-11-04T13:41:28.569Z" },
{ url = "https://files.pythonhosted.org/packages/bb/3d/6913dde84d5be21e284439676168b28d8bbba5600d838b9dca99de0fad71/pydantic_core-2.41.5-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:b80aa5095cd3109962a298ce14110ae16b8c1aece8b72f9dafe81cf597ad80b3", size = 1863760, upload-time = "2025-11-04T13:41:31.055Z" },
{ url = "https://files.pythonhosted.org/packages/5a/f0/e5e6b99d4191da102f2b0eb9687aaa7f5bea5d9964071a84effc3e40f997/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3006c3dd9ba34b0c094c544c6006cc79e87d8612999f1a5d43b769b89181f23c", size = 1878092, upload-time = "2025-11-04T13:41:33.21Z" },
{ url = "https://files.pythonhosted.org/packages/71/48/36fb760642d568925953bcc8116455513d6e34c4beaa37544118c36aba6d/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:72f6c8b11857a856bcfa48c86f5368439f74453563f951e473514579d44aa612", size = 2053385, upload-time = "2025-11-04T13:41:35.508Z" },
{ url = "https://files.pythonhosted.org/packages/20/25/92dc684dd8eb75a234bc1c764b4210cf2646479d54b47bf46061657292a8/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5cb1b2f9742240e4bb26b652a5aeb840aa4b417c7748b6f8387927bc6e45e40d", size = 2218832, upload-time = "2025-11-04T13:41:37.732Z" },
{ url = "https://files.pythonhosted.org/packages/e2/09/f53e0b05023d3e30357d82eb35835d0f6340ca344720a4599cd663dca599/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:bd3d54f38609ff308209bd43acea66061494157703364ae40c951f83ba99a1a9", size = 2327585, upload-time = "2025-11-04T13:41:40Z" },
{ url = "https://files.pythonhosted.org/packages/aa/4e/2ae1aa85d6af35a39b236b1b1641de73f5a6ac4d5a7509f77b814885760c/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2ff4321e56e879ee8d2a879501c8e469414d948f4aba74a2d4593184eb326660", size = 2041078, upload-time = "2025-11-04T13:41:42.323Z" },
{ url = "https://files.pythonhosted.org/packages/cd/13/2e215f17f0ef326fc72afe94776edb77525142c693767fc347ed6288728d/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d0d2568a8c11bf8225044aa94409e21da0cb09dcdafe9ecd10250b2baad531a9", size = 2173914, upload-time = "2025-11-04T13:41:45.221Z" },
{ url = "https://files.pythonhosted.org/packages/02/7a/f999a6dcbcd0e5660bc348a3991c8915ce6599f4f2c6ac22f01d7a10816c/pydantic_core-2.41.5-cp314-cp314t-musllinux_1_1_aarch64.whl", hash = "sha256:a39455728aabd58ceabb03c90e12f71fd30fa69615760a075b9fec596456ccc3", size = 2129560, upload-time = "2025-11-04T13:41:47.474Z" },
{ url = "https://files.pythonhosted.org/packages/3a/b1/6c990ac65e3b4c079a4fb9f5b05f5b013afa0f4ed6780a3dd236d2cbdc64/pydantic_core-2.41.5-cp314-cp314t-musllinux_1_1_armv7l.whl", hash = "sha256:239edca560d05757817c13dc17c50766136d21f7cd0fac50295499ae24f90fdf", size = 2329244, upload-time = "2025-11-04T13:41:49.992Z" },
{ url = "https://files.pythonhosted.org/packages/d9/02/3c562f3a51afd4d88fff8dffb1771b30cfdfd79befd9883ee094f5b6c0d8/pydantic_core-2.41.5-cp314-cp314t-musllinux_1_1_x86_64.whl", hash = "sha256:2a5e06546e19f24c6a96a129142a75cee553cc018ffee48a460059b1185f4470", size = 2331955, upload-time = "2025-11-04T13:41:54.079Z" },
{ url = "https://files.pythonhosted.org/packages/5c/96/5fb7d8c3c17bc8c62fdb031c47d77a1af698f1d7a406b0f79aaa1338f9ad/pydantic_core-2.41.5-cp314-cp314t-win32.whl", hash = "sha256:b4ececa40ac28afa90871c2cc2b9ffd2ff0bf749380fbdf57d165fd23da353aa", size = 1988906, upload-time = "2025-11-04T13:41:56.606Z" },
{ url = "https://files.pythonhosted.org/packages/22/ed/182129d83032702912c2e2d8bbe33c036f342cc735737064668585dac28f/pydantic_core-2.41.5-cp314-cp314t-win_amd64.whl", hash = "sha256:80aa89cad80b32a912a65332f64a4450ed00966111b6615ca6816153d3585a8c", size = 1981607, upload-time = "2025-11-04T13:41:58.889Z" },
{ url = "https://files.pythonhosted.org/packages/9f/ed/068e41660b832bb0b1aa5b58011dea2a3fe0ba7861ff38c4d4904c1c1a99/pydantic_core-2.41.5-cp314-cp314t-win_arm64.whl", hash = "sha256:35b44f37a3199f771c3eaa53051bc8a70cd7b54f333531c59e29fd4db5d15008", size = 1974769, upload-time = "2025-11-04T13:42:01.186Z" },
{ url = "https://files.pythonhosted.org/packages/09/32/59b0c7e63e277fa7911c2fc70ccfb45ce4b98991e7ef37110663437005af/pydantic_core-2.41.5-graalpy312-graalpy250_312_native-macosx_10_12_x86_64.whl", hash = "sha256:7da7087d756b19037bc2c06edc6c170eeef3c3bafcb8f532ff17d64dc427adfd", size = 2110495, upload-time = "2025-11-04T13:42:49.689Z" },
{ url = "https://files.pythonhosted.org/packages/aa/81/05e400037eaf55ad400bcd318c05bb345b57e708887f07ddb2d20e3f0e98/pydantic_core-2.41.5-graalpy312-graalpy250_312_native-macosx_11_0_arm64.whl", hash = "sha256:aabf5777b5c8ca26f7824cb4a120a740c9588ed58df9b2d196ce92fba42ff8dc", size = 1915388, upload-time = "2025-11-04T13:42:52.215Z" },
{ url = "https://files.pythonhosted.org/packages/6e/0d/e3549b2399f71d56476b77dbf3cf8937cec5cd70536bdc0e374a421d0599/pydantic_core-2.41.5-graalpy312-graalpy250_312_native-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c007fe8a43d43b3969e8469004e9845944f1a80e6acd47c150856bb87f230c56", size = 1942879, upload-time = "2025-11-04T13:42:56.483Z" },
{ url = "https://files.pythonhosted.org/packages/f7/07/34573da085946b6a313d7c42f82f16e8920bfd730665de2d11c0c37a74b5/pydantic_core-2.41.5-graalpy312-graalpy250_312_native-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:76d0819de158cd855d1cbb8fcafdf6f5cf1eb8e470abe056d5d161106e38062b", size = 2139017, upload-time = "2025-11-04T13:42:59.471Z" },
]
[[package]]
name = "pygments"
version = "2.19.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/b0/77/a5b8c569bf593b0140bde72ea885a803b82086995367bf2037de0159d924/pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887", size = 4968631, upload-time = "2025-06-21T13:39:12.283Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c7/21/705964c7812476f378728bdf590ca4b771ec72385c533964653c68e86bdc/pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b", size = 1225217, upload-time = "2025-06-21T13:39:07.939Z" },
]
[[package]]
name = "pyright"
version = "1.1.408"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "nodeenv" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/74/b2/5db700e52554b8f025faa9c3c624c59f1f6c8841ba81ab97641b54322f16/pyright-1.1.408.tar.gz", hash = "sha256:f28f2321f96852fa50b5829ea492f6adb0e6954568d1caa3f3af3a5f555eb684", size = 4400578, upload-time = "2026-01-08T08:07:38.795Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/0c/82/a2c93e32800940d9573fb28c346772a14778b84ba7524e691b324620ab89/pyright-1.1.408-py3-none-any.whl", hash = "sha256:090b32865f4fdb1e0e6cd82bf5618480d48eecd2eb2e70f960982a3d9a4c17c1", size = 6399144, upload-time = "2026-01-08T08:07:37.082Z" },
]
[[package]]
name = "pytest"
version = "9.0.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
{ name = "iniconfig" },
{ name = "packaging" },
{ name = "pluggy" },
{ name = "pygments" },
]
sdist = { url = "https://files.pythonhosted.org/packages/d1/db/7ef3487e0fb0049ddb5ce41d3a49c235bf9ad299b6a25d5780a89f19230f/pytest-9.0.2.tar.gz", hash = "sha256:75186651a92bd89611d1d9fc20f0b4345fd827c41ccd5c299a868a05d70edf11", size = 1568901, upload-time = "2025-12-06T21:30:51.014Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/3b/ab/b3226f0bd7cdcf710fbede2b3548584366da3b19b5021e74f5bde2a8fa3f/pytest-9.0.2-py3-none-any.whl", hash = "sha256:711ffd45bf766d5264d487b917733b453d917afd2b0ad65223959f59089f875b", size = 374801, upload-time = "2025-12-06T21:30:49.154Z" },
]
[[package]]
name = "python-dateutil"
version = "2.9.0.post0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "six" },
]
sdist = { url = "https://files.pythonhosted.org/packages/66/c0/0c8b6ad9f17a802ee498c46e004a0eb49bc148f2fd230864601a86dcf6db/python-dateutil-2.9.0.post0.tar.gz", hash = "sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3", size = 342432, upload-time = "2024-03-01T18:36:20.211Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ec/57/56b9bcc3c9c6a792fcbaf139543cee77261f3651ca9da0c93f5c1221264b/python_dateutil-2.9.0.post0-py2.py3-none-any.whl", hash = "sha256:a8b2bc7bffae282281c8140a97d3aa9c14da0b136dfe83f850eea9a5f7470427", size = 229892, upload-time = "2024-03-01T18:36:18.57Z" },
]
[[package]]
name = "python-dotenv"
version = "1.2.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f0/26/19cadc79a718c5edbec86fd4919a6b6d3f681039a2f6d66d14be94e75fb9/python_dotenv-1.2.1.tar.gz", hash = "sha256:42667e897e16ab0d66954af0e60a9caa94f0fd4ecf3aaf6d2d260eec1aa36ad6", size = 44221, upload-time = "2025-10-26T15:12:10.434Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/14/1b/a298b06749107c305e1fe0f814c6c74aea7b2f1e10989cb30f544a1b3253/python_dotenv-1.2.1-py3-none-any.whl", hash = "sha256:b81ee9561e9ca4004139c6cbba3a238c32b03e4894671e181b671e8cb8425d61", size = 21230, upload-time = "2025-10-26T15:12:09.109Z" },
]
[[package]]
name = "pyyaml"
version = "6.0.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/05/8e/961c0007c59b8dd7729d542c61a4d537767a59645b82a0b521206e1e25c2/pyyaml-6.0.3.tar.gz", hash = "sha256:d76623373421df22fb4cf8817020cbb7ef15c725b9d5e45f17e189bfc384190f", size = 130960, upload-time = "2025-09-25T21:33:16.546Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d1/33/422b98d2195232ca1826284a76852ad5a86fe23e31b009c9886b2d0fb8b2/pyyaml-6.0.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:7f047e29dcae44602496db43be01ad42fc6f1cc0d8cd6c83d342306c32270196", size = 182063, upload-time = "2025-09-25T21:32:11.445Z" },
{ url = "https://files.pythonhosted.org/packages/89/a0/6cf41a19a1f2f3feab0e9c0b74134aa2ce6849093d5517a0c550fe37a648/pyyaml-6.0.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:fc09d0aa354569bc501d4e787133afc08552722d3ab34836a80547331bb5d4a0", size = 173973, upload-time = "2025-09-25T21:32:12.492Z" },
{ url = "https://files.pythonhosted.org/packages/ed/23/7a778b6bd0b9a8039df8b1b1d80e2e2ad78aa04171592c8a5c43a56a6af4/pyyaml-6.0.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:9149cad251584d5fb4981be1ecde53a1ca46c891a79788c0df828d2f166bda28", size = 775116, upload-time = "2025-09-25T21:32:13.652Z" },
{ url = "https://files.pythonhosted.org/packages/65/30/d7353c338e12baef4ecc1b09e877c1970bd3382789c159b4f89d6a70dc09/pyyaml-6.0.3-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:5fdec68f91a0c6739b380c83b951e2c72ac0197ace422360e6d5a959d8d97b2c", size = 844011, upload-time = "2025-09-25T21:32:15.21Z" },
{ url = "https://files.pythonhosted.org/packages/8b/9d/b3589d3877982d4f2329302ef98a8026e7f4443c765c46cfecc8858c6b4b/pyyaml-6.0.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ba1cc08a7ccde2d2ec775841541641e4548226580ab850948cbfda66a1befcdc", size = 807870, upload-time = "2025-09-25T21:32:16.431Z" },
{ url = "https://files.pythonhosted.org/packages/05/c0/b3be26a015601b822b97d9149ff8cb5ead58c66f981e04fedf4e762f4bd4/pyyaml-6.0.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:8dc52c23056b9ddd46818a57b78404882310fb473d63f17b07d5c40421e47f8e", size = 761089, upload-time = "2025-09-25T21:32:17.56Z" },
{ url = "https://files.pythonhosted.org/packages/be/8e/98435a21d1d4b46590d5459a22d88128103f8da4c2d4cb8f14f2a96504e1/pyyaml-6.0.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:41715c910c881bc081f1e8872880d3c650acf13dfa8214bad49ed4cede7c34ea", size = 790181, upload-time = "2025-09-25T21:32:18.834Z" },
{ url = "https://files.pythonhosted.org/packages/74/93/7baea19427dcfbe1e5a372d81473250b379f04b1bd3c4c5ff825e2327202/pyyaml-6.0.3-cp312-cp312-win32.whl", hash = "sha256:96b533f0e99f6579b3d4d4995707cf36df9100d67e0c8303a0c55b27b5f99bc5", size = 137658, upload-time = "2025-09-25T21:32:20.209Z" },
{ url = "https://files.pythonhosted.org/packages/86/bf/899e81e4cce32febab4fb42bb97dcdf66bc135272882d1987881a4b519e9/pyyaml-6.0.3-cp312-cp312-win_amd64.whl", hash = "sha256:5fcd34e47f6e0b794d17de1b4ff496c00986e1c83f7ab2fb8fcfe9616ff7477b", size = 154003, upload-time = "2025-09-25T21:32:21.167Z" },
{ url = "https://files.pythonhosted.org/packages/1a/08/67bd04656199bbb51dbed1439b7f27601dfb576fb864099c7ef0c3e55531/pyyaml-6.0.3-cp312-cp312-win_arm64.whl", hash = "sha256:64386e5e707d03a7e172c0701abfb7e10f0fb753ee1d773128192742712a98fd", size = 140344, upload-time = "2025-09-25T21:32:22.617Z" },
{ url = "https://files.pythonhosted.org/packages/d1/11/0fd08f8192109f7169db964b5707a2f1e8b745d4e239b784a5a1dd80d1db/pyyaml-6.0.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:8da9669d359f02c0b91ccc01cac4a67f16afec0dac22c2ad09f46bee0697eba8", size = 181669, upload-time = "2025-09-25T21:32:23.673Z" },
{ url = "https://files.pythonhosted.org/packages/b1/16/95309993f1d3748cd644e02e38b75d50cbc0d9561d21f390a76242ce073f/pyyaml-6.0.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:2283a07e2c21a2aa78d9c4442724ec1eb15f5e42a723b99cb3d822d48f5f7ad1", size = 173252, upload-time = "2025-09-25T21:32:25.149Z" },
{ url = "https://files.pythonhosted.org/packages/50/31/b20f376d3f810b9b2371e72ef5adb33879b25edb7a6d072cb7ca0c486398/pyyaml-6.0.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ee2922902c45ae8ccada2c5b501ab86c36525b883eff4255313a253a3160861c", size = 767081, upload-time = "2025-09-25T21:32:26.575Z" },
{ url = "https://files.pythonhosted.org/packages/49/1e/a55ca81e949270d5d4432fbbd19dfea5321eda7c41a849d443dc92fd1ff7/pyyaml-6.0.3-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a33284e20b78bd4a18c8c2282d549d10bc8408a2a7ff57653c0cf0b9be0afce5", size = 841159, upload-time = "2025-09-25T21:32:27.727Z" },
{ url = "https://files.pythonhosted.org/packages/74/27/e5b8f34d02d9995b80abcef563ea1f8b56d20134d8f4e5e81733b1feceb2/pyyaml-6.0.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0f29edc409a6392443abf94b9cf89ce99889a1dd5376d94316ae5145dfedd5d6", size = 801626, upload-time = "2025-09-25T21:32:28.878Z" },
{ url = "https://files.pythonhosted.org/packages/f9/11/ba845c23988798f40e52ba45f34849aa8a1f2d4af4b798588010792ebad6/pyyaml-6.0.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f7057c9a337546edc7973c0d3ba84ddcdf0daa14533c2065749c9075001090e6", size = 753613, upload-time = "2025-09-25T21:32:30.178Z" },
{ url = "https://files.pythonhosted.org/packages/3d/e0/7966e1a7bfc0a45bf0a7fb6b98ea03fc9b8d84fa7f2229e9659680b69ee3/pyyaml-6.0.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:eda16858a3cab07b80edaf74336ece1f986ba330fdb8ee0d6c0d68fe82bc96be", size = 794115, upload-time = "2025-09-25T21:32:31.353Z" },
{ url = "https://files.pythonhosted.org/packages/de/94/980b50a6531b3019e45ddeada0626d45fa85cbe22300844a7983285bed3b/pyyaml-6.0.3-cp313-cp313-win32.whl", hash = "sha256:d0eae10f8159e8fdad514efdc92d74fd8d682c933a6dd088030f3834bc8e6b26", size = 137427, upload-time = "2025-09-25T21:32:32.58Z" },
{ url = "https://files.pythonhosted.org/packages/97/c9/39d5b874e8b28845e4ec2202b5da735d0199dbe5b8fb85f91398814a9a46/pyyaml-6.0.3-cp313-cp313-win_amd64.whl", hash = "sha256:79005a0d97d5ddabfeeea4cf676af11e647e41d81c9a7722a193022accdb6b7c", size = 154090, upload-time = "2025-09-25T21:32:33.659Z" },
{ url = "https://files.pythonhosted.org/packages/73/e8/2bdf3ca2090f68bb3d75b44da7bbc71843b19c9f2b9cb9b0f4ab7a5a4329/pyyaml-6.0.3-cp313-cp313-win_arm64.whl", hash = "sha256:5498cd1645aa724a7c71c8f378eb29ebe23da2fc0d7a08071d89469bf1d2defb", size = 140246, upload-time = "2025-09-25T21:32:34.663Z" },
{ url = "https://files.pythonhosted.org/packages/9d/8c/f4bd7f6465179953d3ac9bc44ac1a8a3e6122cf8ada906b4f96c60172d43/pyyaml-6.0.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:8d1fab6bb153a416f9aeb4b8763bc0f22a5586065f86f7664fc23339fc1c1fac", size = 181814, upload-time = "2025-09-25T21:32:35.712Z" },
{ url = "https://files.pythonhosted.org/packages/bd/9c/4d95bb87eb2063d20db7b60faa3840c1b18025517ae857371c4dd55a6b3a/pyyaml-6.0.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:34d5fcd24b8445fadc33f9cf348c1047101756fd760b4dacb5c3e99755703310", size = 173809, upload-time = "2025-09-25T21:32:36.789Z" },
{ url = "https://files.pythonhosted.org/packages/92/b5/47e807c2623074914e29dabd16cbbdd4bf5e9b2db9f8090fa64411fc5382/pyyaml-6.0.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:501a031947e3a9025ed4405a168e6ef5ae3126c59f90ce0cd6f2bfc477be31b7", size = 766454, upload-time = "2025-09-25T21:32:37.966Z" },
{ url = "https://files.pythonhosted.org/packages/02/9e/e5e9b168be58564121efb3de6859c452fccde0ab093d8438905899a3a483/pyyaml-6.0.3-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:b3bc83488de33889877a0f2543ade9f70c67d66d9ebb4ac959502e12de895788", size = 836355, upload-time = "2025-09-25T21:32:39.178Z" },
{ url = "https://files.pythonhosted.org/packages/88/f9/16491d7ed2a919954993e48aa941b200f38040928474c9e85ea9e64222c3/pyyaml-6.0.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c458b6d084f9b935061bc36216e8a69a7e293a2f1e68bf956dcd9e6cbcd143f5", size = 794175, upload-time = "2025-09-25T21:32:40.865Z" },
{ url = "https://files.pythonhosted.org/packages/dd/3f/5989debef34dc6397317802b527dbbafb2b4760878a53d4166579111411e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7c6610def4f163542a622a73fb39f534f8c101d690126992300bf3207eab9764", size = 755228, upload-time = "2025-09-25T21:32:42.084Z" },
{ url = "https://files.pythonhosted.org/packages/d7/ce/af88a49043cd2e265be63d083fc75b27b6ed062f5f9fd6cdc223ad62f03e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:5190d403f121660ce8d1d2c1bb2ef1bd05b5f68533fc5c2ea899bd15f4399b35", size = 789194, upload-time = "2025-09-25T21:32:43.362Z" },
{ url = "https://files.pythonhosted.org/packages/23/20/bb6982b26a40bb43951265ba29d4c246ef0ff59c9fdcdf0ed04e0687de4d/pyyaml-6.0.3-cp314-cp314-win_amd64.whl", hash = "sha256:4a2e8cebe2ff6ab7d1050ecd59c25d4c8bd7e6f400f5f82b96557ac0abafd0ac", size = 156429, upload-time = "2025-09-25T21:32:57.844Z" },
{ url = "https://files.pythonhosted.org/packages/f4/f4/a4541072bb9422c8a883ab55255f918fa378ecf083f5b85e87fc2b4eda1b/pyyaml-6.0.3-cp314-cp314-win_arm64.whl", hash = "sha256:93dda82c9c22deb0a405ea4dc5f2d0cda384168e466364dec6255b293923b2f3", size = 143912, upload-time = "2025-09-25T21:32:59.247Z" },
{ url = "https://files.pythonhosted.org/packages/7c/f9/07dd09ae774e4616edf6cda684ee78f97777bdd15847253637a6f052a62f/pyyaml-6.0.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:02893d100e99e03eda1c8fd5c441d8c60103fd175728e23e431db1b589cf5ab3", size = 189108, upload-time = "2025-09-25T21:32:44.377Z" },
{ url = "https://files.pythonhosted.org/packages/4e/78/8d08c9fb7ce09ad8c38ad533c1191cf27f7ae1effe5bb9400a46d9437fcf/pyyaml-6.0.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:c1ff362665ae507275af2853520967820d9124984e0f7466736aea23d8611fba", size = 183641, upload-time = "2025-09-25T21:32:45.407Z" },
{ url = "https://files.pythonhosted.org/packages/7b/5b/3babb19104a46945cf816d047db2788bcaf8c94527a805610b0289a01c6b/pyyaml-6.0.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6adc77889b628398debc7b65c073bcb99c4a0237b248cacaf3fe8a557563ef6c", size = 831901, upload-time = "2025-09-25T21:32:48.83Z" },
{ url = "https://files.pythonhosted.org/packages/8b/cc/dff0684d8dc44da4d22a13f35f073d558c268780ce3c6ba1b87055bb0b87/pyyaml-6.0.3-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a80cb027f6b349846a3bf6d73b5e95e782175e52f22108cfa17876aaeff93702", size = 861132, upload-time = "2025-09-25T21:32:50.149Z" },
{ url = "https://files.pythonhosted.org/packages/b1/5e/f77dc6b9036943e285ba76b49e118d9ea929885becb0a29ba8a7c75e29fe/pyyaml-6.0.3-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:00c4bdeba853cc34e7dd471f16b4114f4162dc03e6b7afcc2128711f0eca823c", size = 839261, upload-time = "2025-09-25T21:32:51.808Z" },
{ url = "https://files.pythonhosted.org/packages/ce/88/a9db1376aa2a228197c58b37302f284b5617f56a5d959fd1763fb1675ce6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:66e1674c3ef6f541c35191caae2d429b967b99e02040f5ba928632d9a7f0f065", size = 805272, upload-time = "2025-09-25T21:32:52.941Z" },
{ url = "https://files.pythonhosted.org/packages/da/92/1446574745d74df0c92e6aa4a7b0b3130706a4142b2d1a5869f2eaa423c6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:16249ee61e95f858e83976573de0f5b2893b3677ba71c9dd36b9cf8be9ac6d65", size = 829923, upload-time = "2025-09-25T21:32:54.537Z" },
{ url = "https://files.pythonhosted.org/packages/f0/7a/1c7270340330e575b92f397352af856a8c06f230aa3e76f86b39d01b416a/pyyaml-6.0.3-cp314-cp314t-win_amd64.whl", hash = "sha256:4ad1906908f2f5ae4e5a8ddfce73c320c2a1429ec52eafd27138b7f1cbe341c9", size = 174062, upload-time = "2025-09-25T21:32:55.767Z" },
{ url = "https://files.pythonhosted.org/packages/f1/12/de94a39c2ef588c7e6455cfbe7343d3b2dc9d6b6b2f40c4c6565744c873d/pyyaml-6.0.3-cp314-cp314t-win_arm64.whl", hash = "sha256:ebc55a14a21cb14062aa4162f906cd962b28e2e9ea38f9b4391244cd8de4ae0b", size = 149341, upload-time = "2025-09-25T21:32:56.828Z" },
]

[[package]]
name = "ruff"
version = "0.15.4"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/da/31/d6e536cdebb6568ae75a7f00e4b4819ae0ad2640c3604c305a0428680b0c/ruff-0.15.4.tar.gz", hash = "sha256:3412195319e42d634470cc97aa9803d07e9d5c9223b99bcb1518f0c725f26ae1", size = 4569550, upload-time = "2026-02-26T20:04:14.959Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/f2/82/c11a03cfec3a4d26a0ea1e571f0f44be5993b923f905eeddfc397c13d360/ruff-0.15.4-py3-none-linux_armv6l.whl", hash = "sha256:a1810931c41606c686bae8b5b9a8072adac2f611bb433c0ba476acba17a332e0", size = 10453333, upload-time = "2026-02-26T20:04:20.093Z" },
{ url = "https://files.pythonhosted.org/packages/ce/5d/6a1f271f6e31dffb31855996493641edc3eef8077b883eaf007a2f1c2976/ruff-0.15.4-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:5a1632c66672b8b4d3e1d1782859e98d6e0b4e70829530666644286600a33992", size = 10853356, upload-time = "2026-02-26T20:04:05.808Z" },
{ url = "https://files.pythonhosted.org/packages/b1/d8/0fab9f8842b83b1a9c2bf81b85063f65e93fb512e60effa95b0be49bfc54/ruff-0.15.4-py3-none-macosx_11_0_arm64.whl", hash = "sha256:a4386ba2cd6c0f4ff75252845906acc7c7c8e1ac567b7bc3d373686ac8c222ba", size = 10187434, upload-time = "2026-02-26T20:03:54.656Z" },
{ url = "https://files.pythonhosted.org/packages/85/cc/cc220fd9394eff5db8d94dec199eec56dd6c9f3651d8869d024867a91030/ruff-0.15.4-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b2496488bdfd3732747558b6f95ae427ff066d1fcd054daf75f5a50674411e75", size = 10535456, upload-time = "2026-02-26T20:03:52.738Z" },
{ url = "https://files.pythonhosted.org/packages/fa/0f/bced38fa5cf24373ec767713c8e4cadc90247f3863605fb030e597878661/ruff-0.15.4-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:3f1c4893841ff2d54cbda1b2860fa3260173df5ddd7b95d370186f8a5e66a4ac", size = 10287772, upload-time = "2026-02-26T20:04:08.138Z" },
{ url = "https://files.pythonhosted.org/packages/2b/90/58a1802d84fed15f8f281925b21ab3cecd813bde52a8ca033a4de8ab0e7a/ruff-0.15.4-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:820b8766bd65503b6c30aaa6331e8ef3a6e564f7999c844e9a547c40179e440a", size = 11049051, upload-time = "2026-02-26T20:04:03.53Z" },
{ url = "https://files.pythonhosted.org/packages/d2/ac/b7ad36703c35f3866584564dc15f12f91cb1a26a897dc2fd13d7cb3ae1af/ruff-0.15.4-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c9fb74bab47139c1751f900f857fa503987253c3ef89129b24ed375e72873e85", size = 11890494, upload-time = "2026-02-26T20:04:10.497Z" },
{ url = "https://files.pythonhosted.org/packages/93/3d/3eb2f47a39a8b0da99faf9c54d3eb24720add1e886a5309d4d1be73a6380/ruff-0.15.4-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f80c98765949c518142b3a50a5db89343aa90f2c2bf7799de9986498ae6176db", size = 11326221, upload-time = "2026-02-26T20:04:12.84Z" },
{ url = "https://files.pythonhosted.org/packages/ff/90/bf134f4c1e5243e62690e09d63c55df948a74084c8ac3e48a88468314da6/ruff-0.15.4-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:451a2e224151729b3b6c9ffb36aed9091b2996fe4bdbd11f47e27d8f2e8888ec", size = 11168459, upload-time = "2026-02-26T20:04:00.969Z" },
{ url = "https://files.pythonhosted.org/packages/b5/e5/a64d27688789b06b5d55162aafc32059bb8c989c61a5139a36e1368285eb/ruff-0.15.4-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:a8f157f2e583c513c4f5f896163a93198297371f34c04220daf40d133fdd4f7f", size = 11104366, upload-time = "2026-02-26T20:03:48.099Z" },
{ url = "https://files.pythonhosted.org/packages/f1/f6/32d1dcb66a2559763fc3027bdd65836cad9eb09d90f2ed6a63d8e9252b02/ruff-0.15.4-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:917cc68503357021f541e69b35361c99387cdbbf99bd0ea4aa6f28ca99ff5338", size = 10510887, upload-time = "2026-02-26T20:03:45.771Z" },
{ url = "https://files.pythonhosted.org/packages/ff/92/22d1ced50971c5b6433aed166fcef8c9343f567a94cf2b9d9089f6aa80fe/ruff-0.15.4-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:e9737c8161da79fd7cfec19f1e35620375bd8b2a50c3e77fa3d2c16f574105cc", size = 10285939, upload-time = "2026-02-26T20:04:22.42Z" },
{ url = "https://files.pythonhosted.org/packages/e6/f4/7c20aec3143837641a02509a4668fb146a642fd1211846634edc17eb5563/ruff-0.15.4-py3-none-musllinux_1_2_i686.whl", hash = "sha256:291258c917539e18f6ba40482fe31d6f5ac023994ee11d7bdafd716f2aab8a68", size = 10765471, upload-time = "2026-02-26T20:03:58.924Z" },
{ url = "https://files.pythonhosted.org/packages/d0/09/6d2f7586f09a16120aebdff8f64d962d7c4348313c77ebb29c566cefc357/ruff-0.15.4-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:3f83c45911da6f2cd5936c436cf86b9f09f09165f033a99dcf7477e34041cbc3", size = 11263382, upload-time = "2026-02-26T20:04:24.424Z" },
{ url = "https://files.pythonhosted.org/packages/1b/fa/2ef715a1cd329ef47c1a050e10dee91a9054b7ce2fcfdd6a06d139afb7ec/ruff-0.15.4-py3-none-win32.whl", hash = "sha256:65594a2d557d4ee9f02834fcdf0a28daa8b3b9f6cb2cb93846025a36db47ef22", size = 10506664, upload-time = "2026-02-26T20:03:50.56Z" },
{ url = "https://files.pythonhosted.org/packages/d0/a8/c688ef7e29983976820d18710f955751d9f4d4eb69df658af3d006e2ba3e/ruff-0.15.4-py3-none-win_amd64.whl", hash = "sha256:04196ad44f0df220c2ece5b0e959c2f37c777375ec744397d21d15b50a75264f", size = 11651048, upload-time = "2026-02-26T20:04:17.191Z" },
{ url = "https://files.pythonhosted.org/packages/3e/0a/9e1be9035b37448ce2e68c978f0591da94389ade5a5abafa4cf99985d1b2/ruff-0.15.4-py3-none-win_arm64.whl", hash = "sha256:60d5177e8cfc70e51b9c5fad936c634872a74209f934c1e79107d11787ad5453", size = 10966776, upload-time = "2026-02-26T20:03:56.908Z" },
]

[[package]]
name = "s3transfer"
version = "0.16.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "botocore" },
]
sdist = { url = "https://files.pythonhosted.org/packages/05/04/74127fc843314818edfa81b5540e26dd537353b123a4edc563109d8f17dd/s3transfer-0.16.0.tar.gz", hash = "sha256:8e990f13268025792229cd52fa10cb7163744bf56e719e0b9cb925ab79abf920", size = 153827, upload-time = "2025-12-01T02:30:59.114Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/fc/51/727abb13f44c1fcf6d145979e1535a35794db0f6e450a0cb46aa24732fe2/s3transfer-0.16.0-py3-none-any.whl", hash = "sha256:18e25d66fed509e3868dc1572b3f427ff947dd2c56f844a5bf09481ad3f3b2fe", size = 86830, upload-time = "2025-12-01T02:30:57.729Z" },
]

[[package]]
name = "six"
version = "1.17.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/94/e7/b2c673351809dca68a0e064b6af791aa332cf192da575fd474ed7d6f16a2/six-1.17.0.tar.gz", hash = "sha256:ff70335d468e7eb6ec65b95b99d3a2836546063f63acc5171de367e834932a81", size = 34031, upload-time = "2024-12-04T17:35:28.174Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/b7/ce/149a00dd41f10bc29e5921b496af8b574d8413afcd5e30dfa0ed46c2cc5e/six-1.17.0-py2.py3-none-any.whl", hash = "sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274", size = 11050, upload-time = "2024-12-04T17:35:26.475Z" },
]

[[package]]
name = "starlette"
version = "0.52.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "anyio" },
{ name = "typing-extensions", marker = "python_full_version < '3.13'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/c4/68/79977123bb7be889ad680d79a40f339082c1978b5cfcf62c2d8d196873ac/starlette-0.52.1.tar.gz", hash = "sha256:834edd1b0a23167694292e94f597773bc3f89f362be6effee198165a35d62933", size = 2653702, upload-time = "2026-01-18T13:34:11.062Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/81/0d/13d1d239a25cbfb19e740db83143e95c772a1fe10202dda4b76792b114dd/starlette-0.52.1-py3-none-any.whl", hash = "sha256:0029d43eb3d273bc4f83a08720b4912ea4b071087a3b48db01b7c839f7954d74", size = 74272, upload-time = "2026-01-18T13:34:09.188Z" },
]

[[package]]
name = "typing-extensions"
version = "4.15.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/72/94/1a15dd82efb362ac84269196e94cf00f187f7ed21c242792a923cdb1c61f/typing_extensions-4.15.0.tar.gz", hash = "sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466", size = 109391, upload-time = "2025-08-25T13:49:26.313Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/18/67/36e9267722cc04a6b9f15c7f3441c2363321a3ea07da7ae0c0707beb2a9c/typing_extensions-4.15.0-py3-none-any.whl", hash = "sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548", size = 44614, upload-time = "2025-08-25T13:49:24.86Z" },
]

[[package]]
name = "typing-inspection"
version = "0.4.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/55/e3/70399cb7dd41c10ac53367ae42139cf4b1ca5f36bb3dc6c9d33acdb43655/typing_inspection-0.4.2.tar.gz", hash = "sha256:ba561c48a67c5958007083d386c3295464928b01faa735ab8547c5692e87f464", size = 75949, upload-time = "2025-10-01T02:14:41.687Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/dc/9b/47798a6c91d8bdb567fe2698fe81e0c6b7cb7ef4d13da4114b41d239f65d/typing_inspection-0.4.2-py3-none-any.whl", hash = "sha256:4ed1cacbdc298c220f1bd249ed5287caa16f34d44ef4e9c3d0cbad5b521545e7", size = 14611, upload-time = "2025-10-01T02:14:40.154Z" },
]

[[package]]
name = "urllib3"
version = "2.6.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/c7/24/5f1b3bdffd70275f6661c76461e25f024d5a38a46f04aaca912426a2b1d3/urllib3-2.6.3.tar.gz", hash = "sha256:1b62b6884944a57dbe321509ab94fd4d3b307075e0c2eae991ac71ee15ad38ed", size = 435556, upload-time = "2026-01-07T16:24:43.925Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/39/08/aaaad47bc4e9dc8c725e68f9d04865dbcb2052843ff09c97b08904852d84/urllib3-2.6.3-py3-none-any.whl", hash = "sha256:bf272323e553dfb2e87d9bfd225ca7b0f467b919d7bbd355436d3fd37cb0acd4", size = 131584, upload-time = "2026-01-07T16:24:42.685Z" },
]

[[package]]
name = "uvicorn"
version = "0.41.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "click" },
{ name = "h11" },
]
sdist = { url = "https://files.pythonhosted.org/packages/32/ce/eeb58ae4ac36fe09e3842eb02e0eb676bf2c53ae062b98f1b2531673efdd/uvicorn-0.41.0.tar.gz", hash = "sha256:09d11cf7008da33113824ee5a1c6422d89fbc2ff476540d69a34c87fab8b571a", size = 82633, upload-time = "2026-02-16T23:07:24.1Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/83/e4/d04a086285c20886c0daad0e026f250869201013d18f81d9ff5eada73a88/uvicorn-0.41.0-py3-none-any.whl", hash = "sha256:29e35b1d2c36a04b9e180d4007ede3bcb32a85fbdfd6c6aeb3f26839de088187", size = 68783, upload-time = "2026-02-16T23:07:22.357Z" },
]

[package.optional-dependencies]
standard = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
{ name = "httptools" },
{ name = "python-dotenv" },
{ name = "pyyaml" },
{ name = "uvloop", marker = "platform_python_implementation != 'PyPy' and sys_platform != 'cygwin' and sys_platform != 'win32'" },
{ name = "watchfiles" },
{ name = "websockets" },
]

[[package]]
name = "uvloop"
version = "0.22.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/06/f0/18d39dbd1971d6d62c4629cc7fa67f74821b0dc1f5a77af43719de7936a7/uvloop-0.22.1.tar.gz", hash = "sha256:6c84bae345b9147082b17371e3dd5d42775bddce91f885499017f4607fdaf39f", size = 2443250, upload-time = "2025-10-16T22:17:19.342Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/3d/ff/7f72e8170be527b4977b033239a83a68d5c881cc4775fca255c677f7ac5d/uvloop-0.22.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:fe94b4564e865d968414598eea1a6de60adba0c040ba4ed05ac1300de402cd42", size = 1359936, upload-time = "2025-10-16T22:16:29.436Z" },
{ url = "https://files.pythonhosted.org/packages/c3/c6/e5d433f88fd54d81ef4be58b2b7b0cea13c442454a1db703a1eea0db1a59/uvloop-0.22.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:51eb9bd88391483410daad430813d982010f9c9c89512321f5b60e2cddbdddd6", size = 752769, upload-time = "2025-10-16T22:16:30.493Z" },
{ url = "https://files.pythonhosted.org/packages/24/68/a6ac446820273e71aa762fa21cdcc09861edd3536ff47c5cd3b7afb10eeb/uvloop-0.22.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:700e674a166ca5778255e0e1dc4e9d79ab2acc57b9171b79e65feba7184b3370", size = 4317413, upload-time = "2025-10-16T22:16:31.644Z" },
{ url = "https://files.pythonhosted.org/packages/5f/6f/e62b4dfc7ad6518e7eff2516f680d02a0f6eb62c0c212e152ca708a0085e/uvloop-0.22.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:7b5b1ac819a3f946d3b2ee07f09149578ae76066d70b44df3fa990add49a82e4", size = 4426307, upload-time = "2025-10-16T22:16:32.917Z" },
{ url = "https://files.pythonhosted.org/packages/90/60/97362554ac21e20e81bcef1150cb2a7e4ffdaf8ea1e5b2e8bf7a053caa18/uvloop-0.22.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:e047cc068570bac9866237739607d1313b9253c3051ad84738cbb095be0537b2", size = 4131970, upload-time = "2025-10-16T22:16:34.015Z" },
{ url = "https://files.pythonhosted.org/packages/99/39/6b3f7d234ba3964c428a6e40006340f53ba37993f46ed6e111c6e9141d18/uvloop-0.22.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:512fec6815e2dd45161054592441ef76c830eddaad55c8aa30952e6fe1ed07c0", size = 4296343, upload-time = "2025-10-16T22:16:35.149Z" },
{ url = "https://files.pythonhosted.org/packages/89/8c/182a2a593195bfd39842ea68ebc084e20c850806117213f5a299dfc513d9/uvloop-0.22.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:561577354eb94200d75aca23fbde86ee11be36b00e52a4eaf8f50fb0c86b7705", size = 1358611, upload-time = "2025-10-16T22:16:36.833Z" },
{ url = "https://files.pythonhosted.org/packages/d2/14/e301ee96a6dc95224b6f1162cd3312f6d1217be3907b79173b06785f2fe7/uvloop-0.22.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:1cdf5192ab3e674ca26da2eada35b288d2fa49fdd0f357a19f0e7c4e7d5077c8", size = 751811, upload-time = "2025-10-16T22:16:38.275Z" },
{ url = "https://files.pythonhosted.org/packages/b7/02/654426ce265ac19e2980bfd9ea6590ca96a56f10c76e63801a2df01c0486/uvloop-0.22.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6e2ea3d6190a2968f4a14a23019d3b16870dd2190cd69c8180f7c632d21de68d", size = 4288562, upload-time = "2025-10-16T22:16:39.375Z" },
{ url = "https://files.pythonhosted.org/packages/15/c0/0be24758891ef825f2065cd5db8741aaddabe3e248ee6acc5e8a80f04005/uvloop-0.22.1-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0530a5fbad9c9e4ee3f2b33b148c6a64d47bbad8000ea63704fa8260f4cf728e", size = 4366890, upload-time = "2025-10-16T22:16:40.547Z" },
{ url = "https://files.pythonhosted.org/packages/d2/53/8369e5219a5855869bcee5f4d317f6da0e2c669aecf0ef7d371e3d084449/uvloop-0.22.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:bc5ef13bbc10b5335792360623cc378d52d7e62c2de64660616478c32cd0598e", size = 4119472, upload-time = "2025-10-16T22:16:41.694Z" },
{ url = "https://files.pythonhosted.org/packages/f8/ba/d69adbe699b768f6b29a5eec7b47dd610bd17a69de51b251126a801369ea/uvloop-0.22.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:1f38ec5e3f18c8a10ded09742f7fb8de0108796eb673f30ce7762ce1b8550cad", size = 4239051, upload-time = "2025-10-16T22:16:43.224Z" },
{ url = "https://files.pythonhosted.org/packages/90/cd/b62bdeaa429758aee8de8b00ac0dd26593a9de93d302bff3d21439e9791d/uvloop-0.22.1-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:3879b88423ec7e97cd4eba2a443aa26ed4e59b45e6b76aabf13fe2f27023a142", size = 1362067, upload-time = "2025-10-16T22:16:44.503Z" },
{ url = "https://files.pythonhosted.org/packages/0d/f8/a132124dfda0777e489ca86732e85e69afcd1ff7686647000050ba670689/uvloop-0.22.1-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:4baa86acedf1d62115c1dc6ad1e17134476688f08c6efd8a2ab076e815665c74", size = 752423, upload-time = "2025-10-16T22:16:45.968Z" },
{ url = "https://files.pythonhosted.org/packages/a3/94/94af78c156f88da4b3a733773ad5ba0b164393e357cc4bd0ab2e2677a7d6/uvloop-0.22.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:297c27d8003520596236bdb2335e6b3f649480bd09e00d1e3a99144b691d2a35", size = 4272437, upload-time = "2025-10-16T22:16:47.451Z" },
{ url = "https://files.pythonhosted.org/packages/b5/35/60249e9fd07b32c665192cec7af29e06c7cd96fa1d08b84f012a56a0b38e/uvloop-0.22.1-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c1955d5a1dd43198244d47664a5858082a3239766a839b2102a269aaff7a4e25", size = 4292101, upload-time = "2025-10-16T22:16:49.318Z" },
{ url = "https://files.pythonhosted.org/packages/02/62/67d382dfcb25d0a98ce73c11ed1a6fba5037a1a1d533dcbb7cab033a2636/uvloop-0.22.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:b31dc2fccbd42adc73bc4e7cdbae4fc5086cf378979e53ca5d0301838c5682c6", size = 4114158, upload-time = "2025-10-16T22:16:50.517Z" },
{ url = "https://files.pythonhosted.org/packages/f0/7a/f1171b4a882a5d13c8b7576f348acfe6074d72eaf52cccef752f748d4a9f/uvloop-0.22.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:93f617675b2d03af4e72a5333ef89450dfaa5321303ede6e67ba9c9d26878079", size = 4177360, upload-time = "2025-10-16T22:16:52.646Z" },
{ url = "https://files.pythonhosted.org/packages/79/7b/b01414f31546caf0919da80ad57cbfe24c56b151d12af68cee1b04922ca8/uvloop-0.22.1-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:37554f70528f60cad66945b885eb01f1bb514f132d92b6eeed1c90fd54ed6289", size = 1454790, upload-time = "2025-10-16T22:16:54.355Z" },
{ url = "https://files.pythonhosted.org/packages/d4/31/0bb232318dd838cad3fa8fb0c68c8b40e1145b32025581975e18b11fab40/uvloop-0.22.1-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:b76324e2dc033a0b2f435f33eb88ff9913c156ef78e153fb210e03c13da746b3", size = 796783, upload-time = "2025-10-16T22:16:55.906Z" },
{ url = "https://files.pythonhosted.org/packages/42/38/c9b09f3271a7a723a5de69f8e237ab8e7803183131bc57c890db0b6bb872/uvloop-0.22.1-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:badb4d8e58ee08dad957002027830d5c3b06aea446a6a3744483c2b3b745345c", size = 4647548, upload-time = "2025-10-16T22:16:57.008Z" },
{ url = "https://files.pythonhosted.org/packages/c1/37/945b4ca0ac27e3dc4952642d4c900edd030b3da6c9634875af6e13ae80e5/uvloop-0.22.1-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b91328c72635f6f9e0282e4a57da7470c7350ab1c9f48546c0f2866205349d21", size = 4467065, upload-time = "2025-10-16T22:16:58.206Z" },
{ url = "https://files.pythonhosted.org/packages/97/cc/48d232f33d60e2e2e0b42f4e73455b146b76ebe216487e862700457fbf3c/uvloop-0.22.1-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:daf620c2995d193449393d6c62131b3fbd40a63bf7b307a1527856ace637fe88", size = 4328384, upload-time = "2025-10-16T22:16:59.36Z" },
{ url = "https://files.pythonhosted.org/packages/e4/16/c1fd27e9549f3c4baf1dc9c20c456cd2f822dbf8de9f463824b0c0357e06/uvloop-0.22.1-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:6cde23eeda1a25c75b2e07d39970f3374105d5eafbaab2a4482be82f272d5a5e", size = 4296730, upload-time = "2025-10-16T22:17:00.744Z" },
]

[[package]]
name = "watchfiles"
version = "1.1.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "anyio" },
]
sdist = { url = "https://files.pythonhosted.org/packages/c2/c9/8869df9b2a2d6c59d79220a4db37679e74f807c559ffe5265e08b227a210/watchfiles-1.1.1.tar.gz", hash = "sha256:a173cb5c16c4f40ab19cecf48a534c409f7ea983ab8fed0741304a1c0a31b3f2", size = 94440, upload-time = "2025-10-14T15:06:21.08Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/74/d5/f039e7e3c639d9b1d09b07ea412a6806d38123f0508e5f9b48a87b0a76cc/watchfiles-1.1.1-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:8c89f9f2f740a6b7dcc753140dd5e1ab9215966f7a3530d0c0705c83b401bd7d", size = 404745, upload-time = "2025-10-14T15:04:46.731Z" },
{ url = "https://files.pythonhosted.org/packages/a5/96/a881a13aa1349827490dab2d363c8039527060cfcc2c92cc6d13d1b1049e/watchfiles-1.1.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:bd404be08018c37350f0d6e34676bd1e2889990117a2b90070b3007f172d0610", size = 391769, upload-time = "2025-10-14T15:04:48.003Z" },
{ url = "https://files.pythonhosted.org/packages/4b/5b/d3b460364aeb8da471c1989238ea0e56bec24b6042a68046adf3d9ddb01c/watchfiles-1.1.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8526e8f916bb5b9a0a777c8317c23ce65de259422bba5b31325a6fa6029d33af", size = 449374, upload-time = "2025-10-14T15:04:49.179Z" },
{ url = "https://files.pythonhosted.org/packages/b9/44/5769cb62d4ed055cb17417c0a109a92f007114a4e07f30812a73a4efdb11/watchfiles-1.1.1-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2edc3553362b1c38d9f06242416a5d8e9fe235c204a4072e988ce2e5bb1f69f6", size = 459485, upload-time = "2025-10-14T15:04:50.155Z" },
{ url = "https://files.pythonhosted.org/packages/19/0c/286b6301ded2eccd4ffd0041a1b726afda999926cf720aab63adb68a1e36/watchfiles-1.1.1-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:30f7da3fb3f2844259cba4720c3fc7138eb0f7b659c38f3bfa65084c7fc7abce", size = 488813, upload-time = "2025-10-14T15:04:51.059Z" },
{ url = "https://files.pythonhosted.org/packages/c7/2b/8530ed41112dd4a22f4dcfdb5ccf6a1baad1ff6eed8dc5a5f09e7e8c41c7/watchfiles-1.1.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f8979280bdafff686ba5e4d8f97840f929a87ed9cdf133cbbd42f7766774d2aa", size = 594816, upload-time = "2025-10-14T15:04:52.031Z" },
{ url = "https://files.pythonhosted.org/packages/ce/d2/f5f9fb49489f184f18470d4f99f4e862a4b3e9ac2865688eb2099e3d837a/watchfiles-1.1.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dcc5c24523771db3a294c77d94771abcfcb82a0e0ee8efd910c37c59ec1b31bb", size = 475186, upload-time = "2025-10-14T15:04:53.064Z" },
{ url = "https://files.pythonhosted.org/packages/cf/68/5707da262a119fb06fbe214d82dd1fe4a6f4af32d2d14de368d0349eb52a/watchfiles-1.1.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1db5d7ae38ff20153d542460752ff397fcf5c96090c1230803713cf3147a6803", size = 456812, upload-time = "2025-10-14T15:04:55.174Z" },
{ url = "https://files.pythonhosted.org/packages/66/ab/3cbb8756323e8f9b6f9acb9ef4ec26d42b2109bce830cc1f3468df20511d/watchfiles-1.1.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:28475ddbde92df1874b6c5c8aaeb24ad5be47a11f87cde5a28ef3835932e3e94", size = 630196, upload-time = "2025-10-14T15:04:56.22Z" },
{ url = "https://files.pythonhosted.org/packages/78/46/7152ec29b8335f80167928944a94955015a345440f524d2dfe63fc2f437b/watchfiles-1.1.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:36193ed342f5b9842edd3532729a2ad55c4160ffcfa3700e0d54be496b70dd43", size = 622657, upload-time = "2025-10-14T15:04:57.521Z" },
{ url = "https://files.pythonhosted.org/packages/0a/bf/95895e78dd75efe9a7f31733607f384b42eb5feb54bd2eb6ed57cc2e94f4/watchfiles-1.1.1-cp312-cp312-win32.whl", hash = "sha256:859e43a1951717cc8de7f4c77674a6d389b106361585951d9e69572823f311d9", size = 272042, upload-time = "2025-10-14T15:04:59.046Z" },
{ url = "https://files.pythonhosted.org/packages/87/0a/90eb755f568de2688cb220171c4191df932232c20946966c27a59c400850/watchfiles-1.1.1-cp312-cp312-win_amd64.whl", hash = "sha256:91d4c9a823a8c987cce8fa2690923b069966dabb196dd8d137ea2cede885fde9", size = 288410, upload-time = "2025-10-14T15:05:00.081Z" },
{ url = "https://files.pythonhosted.org/packages/36/76/f322701530586922fbd6723c4f91ace21364924822a8772c549483abed13/watchfiles-1.1.1-cp312-cp312-win_arm64.whl", hash = "sha256:a625815d4a2bdca61953dbba5a39d60164451ef34c88d751f6c368c3ea73d404", size = 278209, upload-time = "2025-10-14T15:05:01.168Z" },
{ url = "https://files.pythonhosted.org/packages/bb/f4/f750b29225fe77139f7ae5de89d4949f5a99f934c65a1f1c0b248f26f747/watchfiles-1.1.1-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:130e4876309e8686a5e37dba7d5e9bc77e6ed908266996ca26572437a5271e18", size = 404321, upload-time = "2025-10-14T15:05:02.063Z" },
{ url = "https://files.pythonhosted.org/packages/2b/f9/f07a295cde762644aa4c4bb0f88921d2d141af45e735b965fb2e87858328/watchfiles-1.1.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:5f3bde70f157f84ece3765b42b4a52c6ac1a50334903c6eaf765362f6ccca88a", size = 391783, upload-time = "2025-10-14T15:05:03.052Z" },
{ url = "https://files.pythonhosted.org/packages/bc/11/fc2502457e0bea39a5c958d86d2cb69e407a4d00b85735ca724bfa6e0d1a/watchfiles-1.1.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:14e0b1fe858430fc0251737ef3824c54027bedb8c37c38114488b8e131cf8219", size = 449279, upload-time = "2025-10-14T15:05:04.004Z" },
{ url = "https://files.pythonhosted.org/packages/e3/1f/d66bc15ea0b728df3ed96a539c777acfcad0eb78555ad9efcaa1274688f0/watchfiles-1.1.1-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f27db948078f3823a6bb3b465180db8ebecf26dd5dae6f6180bd87383b6b4428", size = 459405, upload-time = "2025-10-14T15:05:04.942Z" },
{ url = "https://files.pythonhosted.org/packages/be/90/9f4a65c0aec3ccf032703e6db02d89a157462fbb2cf20dd415128251cac0/watchfiles-1.1.1-cp313-cp313-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:059098c3a429f62fc98e8ec62b982230ef2c8df68c79e826e37b895bc359a9c0", size = 488976, upload-time = "2025-10-14T15:05:05.905Z" },
{ url = "https://files.pythonhosted.org/packages/37/57/ee347af605d867f712be7029bb94c8c071732a4b44792e3176fa3c612d39/watchfiles-1.1.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:bfb5862016acc9b869bb57284e6cb35fdf8e22fe59f7548858e2f971d045f150", size = 595506, upload-time = "2025-10-14T15:05:06.906Z" },
{ url = "https://files.pythonhosted.org/packages/a8/78/cc5ab0b86c122047f75e8fc471c67a04dee395daf847d3e59381996c8707/watchfiles-1.1.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:319b27255aacd9923b8a276bb14d21a5f7ff82564c744235fc5eae58d95422ae", size = 474936, upload-time = "2025-10-14T15:05:07.906Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/62/da/def65b170a3815af7bd40a3e7010bf6ab53089ef1b75d05dd5385b87cf08/watchfiles-1.1.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c755367e51db90e75b19454b680903631d41f9e3607fbd941d296a020c2d752d", size = 456147, upload-time = "2025-10-14T15:05:09.138Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/57/99/da6573ba71166e82d288d4df0839128004c67d2778d3b566c138695f5c0b/watchfiles-1.1.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:c22c776292a23bfc7237a98f791b9ad3144b02116ff10d820829ce62dff46d0b", size = 630007, upload-time = "2025-10-14T15:05:10.117Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/a8/51/7439c4dd39511368849eb1e53279cd3454b4a4dbace80bab88feeb83c6b5/watchfiles-1.1.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:3a476189be23c3686bc2f4321dd501cb329c0a0469e77b7b534ee10129ae6374", size = 622280, upload-time = "2025-10-14T15:05:11.146Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/95/9c/8ed97d4bba5db6fdcdb2b298d3898f2dd5c20f6b73aee04eabe56c59677e/watchfiles-1.1.1-cp313-cp313-win32.whl", hash = "sha256:bf0a91bfb5574a2f7fc223cf95eeea79abfefa404bf1ea5e339c0c1560ae99a0", size = 272056, upload-time = "2025-10-14T15:05:12.156Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/1f/f3/c14e28429f744a260d8ceae18bf58c1d5fa56b50d006a7a9f80e1882cb0d/watchfiles-1.1.1-cp313-cp313-win_amd64.whl", hash = "sha256:52e06553899e11e8074503c8e716d574adeeb7e68913115c4b3653c53f9bae42", size = 288162, upload-time = "2025-10-14T15:05:13.208Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/dc/61/fe0e56c40d5cd29523e398d31153218718c5786b5e636d9ae8ae79453d27/watchfiles-1.1.1-cp313-cp313-win_arm64.whl", hash = "sha256:ac3cc5759570cd02662b15fbcd9d917f7ecd47efe0d6b40474eafd246f91ea18", size = 277909, upload-time = "2025-10-14T15:05:14.49Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/79/42/e0a7d749626f1e28c7108a99fb9bf524b501bbbeb9b261ceecde644d5a07/watchfiles-1.1.1-cp313-cp313t-macosx_10_12_x86_64.whl", hash = "sha256:563b116874a9a7ce6f96f87cd0b94f7faf92d08d0021e837796f0a14318ef8da", size = 403389, upload-time = "2025-10-14T15:05:15.777Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/15/49/08732f90ce0fbbc13913f9f215c689cfc9ced345fb1bcd8829a50007cc8d/watchfiles-1.1.1-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:3ad9fe1dae4ab4212d8c91e80b832425e24f421703b5a42ef2e4a1e215aff051", size = 389964, upload-time = "2025-10-14T15:05:16.85Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/27/0d/7c315d4bd5f2538910491a0393c56bf70d333d51bc5b34bee8e68e8cea19/watchfiles-1.1.1-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ce70f96a46b894b36eba678f153f052967a0d06d5b5a19b336ab0dbbd029f73e", size = 448114, upload-time = "2025-10-14T15:05:17.876Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/c3/24/9e096de47a4d11bc4df41e9d1e61776393eac4cb6eb11b3e23315b78b2cc/watchfiles-1.1.1-cp313-cp313t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:cb467c999c2eff23a6417e58d75e5828716f42ed8289fe6b77a7e5a91036ca70", size = 460264, upload-time = "2025-10-14T15:05:18.962Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/cc/0f/e8dea6375f1d3ba5fcb0b3583e2b493e77379834c74fd5a22d66d85d6540/watchfiles-1.1.1-cp313-cp313t-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:836398932192dae4146c8f6f737d74baeac8b70ce14831a239bdb1ca882fc261", size = 487877, upload-time = "2025-10-14T15:05:20.094Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/ac/5b/df24cfc6424a12deb41503b64d42fbea6b8cb357ec62ca84a5a3476f654a/watchfiles-1.1.1-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:743185e7372b7bc7c389e1badcc606931a827112fbbd37f14c537320fca08620", size = 595176, upload-time = "2025-10-14T15:05:21.134Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/8f/b5/853b6757f7347de4e9b37e8cc3289283fb983cba1ab4d2d7144694871d9c/watchfiles-1.1.1-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:afaeff7696e0ad9f02cbb8f56365ff4686ab205fcf9c4c5b6fdfaaa16549dd04", size = 473577, upload-time = "2025-10-14T15:05:22.306Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/e1/f7/0a4467be0a56e80447c8529c9fce5b38eab4f513cb3d9bf82e7392a5696b/watchfiles-1.1.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3f7eb7da0eb23aa2ba036d4f616d46906013a68caf61b7fdbe42fc8b25132e77", size = 455425, upload-time = "2025-10-14T15:05:23.348Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/8e/e0/82583485ea00137ddf69bc84a2db88bd92ab4a6e3c405e5fb878ead8d0e7/watchfiles-1.1.1-cp313-cp313t-musllinux_1_1_aarch64.whl", hash = "sha256:831a62658609f0e5c64178211c942ace999517f5770fe9436be4c2faeba0c0ef", size = 628826, upload-time = "2025-10-14T15:05:24.398Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/28/9a/a785356fccf9fae84c0cc90570f11702ae9571036fb25932f1242c82191c/watchfiles-1.1.1-cp313-cp313t-musllinux_1_1_x86_64.whl", hash = "sha256:f9a2ae5c91cecc9edd47e041a930490c31c3afb1f5e6d71de3dc671bfaca02bf", size = 622208, upload-time = "2025-10-14T15:05:25.45Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/c3/f4/0872229324ef69b2c3edec35e84bd57a1289e7d3fe74588048ed8947a323/watchfiles-1.1.1-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:d1715143123baeeaeadec0528bb7441103979a1d5f6fd0e1f915383fea7ea6d5", size = 404315, upload-time = "2025-10-14T15:05:26.501Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/7b/22/16d5331eaed1cb107b873f6ae1b69e9ced582fcf0c59a50cd84f403b1c32/watchfiles-1.1.1-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:39574d6370c4579d7f5d0ad940ce5b20db0e4117444e39b6d8f99db5676c52fd", size = 390869, upload-time = "2025-10-14T15:05:27.649Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/b2/7e/5643bfff5acb6539b18483128fdc0ef2cccc94a5b8fbda130c823e8ed636/watchfiles-1.1.1-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7365b92c2e69ee952902e8f70f3ba6360d0d596d9299d55d7d386df84b6941fb", size = 449919, upload-time = "2025-10-14T15:05:28.701Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/51/2e/c410993ba5025a9f9357c376f48976ef0e1b1aefb73b97a5ae01a5972755/watchfiles-1.1.1-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:bfff9740c69c0e4ed32416f013f3c45e2ae42ccedd1167ef2d805c000b6c71a5", size = 460845, upload-time = "2025-10-14T15:05:30.064Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/8e/a4/2df3b404469122e8680f0fcd06079317e48db58a2da2950fb45020947734/watchfiles-1.1.1-cp314-cp314-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b27cf2eb1dda37b2089e3907d8ea92922b673c0c427886d4edc6b94d8dfe5db3", size = 489027, upload-time = "2025-10-14T15:05:31.064Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/ea/84/4587ba5b1f267167ee715b7f66e6382cca6938e0a4b870adad93e44747e6/watchfiles-1.1.1-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:526e86aced14a65a5b0ec50827c745597c782ff46b571dbfe46192ab9e0b3c33", size = 595615, upload-time = "2025-10-14T15:05:32.074Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/6a/0f/c6988c91d06e93cd0bb3d4a808bcf32375ca1904609835c3031799e3ecae/watchfiles-1.1.1-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:04e78dd0b6352db95507fd8cb46f39d185cf8c74e4cf1e4fbad1d3df96faf510", size = 474836, upload-time = "2025-10-14T15:05:33.209Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/b4/36/ded8aebea91919485b7bbabbd14f5f359326cb5ec218cd67074d1e426d74/watchfiles-1.1.1-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5c85794a4cfa094714fb9c08d4a218375b2b95b8ed1666e8677c349906246c05", size = 455099, upload-time = "2025-10-14T15:05:34.189Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/98/e0/8c9bdba88af756a2fce230dd365fab2baf927ba42cd47521ee7498fd5211/watchfiles-1.1.1-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:74d5012b7630714b66be7b7b7a78855ef7ad58e8650c73afc4c076a1f480a8d6", size = 630626, upload-time = "2025-10-14T15:05:35.216Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/2a/84/a95db05354bf2d19e438520d92a8ca475e578c647f78f53197f5a2f17aaf/watchfiles-1.1.1-cp314-cp314-musllinux_1_1_x86_64.whl", hash = "sha256:8fbe85cb3201c7d380d3d0b90e63d520f15d6afe217165d7f98c9c649654db81", size = 622519, upload-time = "2025-10-14T15:05:36.259Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/1d/ce/d8acdc8de545de995c339be67711e474c77d643555a9bb74a9334252bd55/watchfiles-1.1.1-cp314-cp314-win32.whl", hash = "sha256:3fa0b59c92278b5a7800d3ee7733da9d096d4aabcfabb9a928918bd276ef9b9b", size = 272078, upload-time = "2025-10-14T15:05:37.63Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/c4/c9/a74487f72d0451524be827e8edec251da0cc1fcf111646a511ae752e1a3d/watchfiles-1.1.1-cp314-cp314-win_amd64.whl", hash = "sha256:c2047d0b6cea13b3316bdbafbfa0c4228ae593d995030fda39089d36e64fc03a", size = 287664, upload-time = "2025-10-14T15:05:38.95Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/df/b8/8ac000702cdd496cdce998c6f4ee0ca1f15977bba51bdf07d872ebdfc34c/watchfiles-1.1.1-cp314-cp314-win_arm64.whl", hash = "sha256:842178b126593addc05acf6fce960d28bc5fae7afbaa2c6c1b3a7b9460e5be02", size = 277154, upload-time = "2025-10-14T15:05:39.954Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/47/a8/e3af2184707c29f0f14b1963c0aace6529f9d1b8582d5b99f31bbf42f59e/watchfiles-1.1.1-cp314-cp314t-macosx_10_12_x86_64.whl", hash = "sha256:88863fbbc1a7312972f1c511f202eb30866370ebb8493aef2812b9ff28156a21", size = 403820, upload-time = "2025-10-14T15:05:40.932Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/c0/ec/e47e307c2f4bd75f9f9e8afbe3876679b18e1bcec449beca132a1c5ffb2d/watchfiles-1.1.1-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:55c7475190662e202c08c6c0f4d9e345a29367438cf8e8037f3155e10a88d5a5", size = 390510, upload-time = "2025-10-14T15:05:41.945Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/d5/a0/ad235642118090f66e7b2f18fd5c42082418404a79205cdfca50b6309c13/watchfiles-1.1.1-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3f53fa183d53a1d7a8852277c92b967ae99c2d4dcee2bfacff8868e6e30b15f7", size = 448408, upload-time = "2025-10-14T15:05:43.385Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/df/85/97fa10fd5ff3332ae17e7e40e20784e419e28521549780869f1413742e9d/watchfiles-1.1.1-cp314-cp314t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:6aae418a8b323732fa89721d86f39ec8f092fc2af67f4217a2b07fd3e93c6101", size = 458968, upload-time = "2025-10-14T15:05:44.404Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/47/c2/9059c2e8966ea5ce678166617a7f75ecba6164375f3b288e50a40dc6d489/watchfiles-1.1.1-cp314-cp314t-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f096076119da54a6080e8920cbdaac3dbee667eb91dcc5e5b78840b87415bd44", size = 488096, upload-time = "2025-10-14T15:05:45.398Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/94/44/d90a9ec8ac309bc26db808a13e7bfc0e4e78b6fc051078a554e132e80160/watchfiles-1.1.1-cp314-cp314t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:00485f441d183717038ed2e887a7c868154f216877653121068107b227a2f64c", size = 596040, upload-time = "2025-10-14T15:05:46.502Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/95/68/4e3479b20ca305cfc561db3ed207a8a1c745ee32bf24f2026a129d0ddb6e/watchfiles-1.1.1-cp314-cp314t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a55f3e9e493158d7bfdb60a1165035f1cf7d320914e7b7ea83fe22c6023b58fc", size = 473847, upload-time = "2025-10-14T15:05:47.484Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/4f/55/2af26693fd15165c4ff7857e38330e1b61ab8c37d15dc79118cdba115b7a/watchfiles-1.1.1-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8c91ed27800188c2ae96d16e3149f199d62f86c7af5f5f4d2c61a3ed8cd3666c", size = 455072, upload-time = "2025-10-14T15:05:48.928Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/66/1d/d0d200b10c9311ec25d2273f8aad8c3ef7cc7ea11808022501811208a750/watchfiles-1.1.1-cp314-cp314t-musllinux_1_1_aarch64.whl", hash = "sha256:311ff15a0bae3714ffb603e6ba6dbfba4065ab60865d15a6ec544133bdb21099", size = 629104, upload-time = "2025-10-14T15:05:49.908Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/e3/bd/fa9bb053192491b3867ba07d2343d9f2252e00811567d30ae8d0f78136fe/watchfiles-1.1.1-cp314-cp314t-musllinux_1_1_x86_64.whl", hash = "sha256:a916a2932da8f8ab582f242c065f5c81bed3462849ca79ee357dd9551b0e9b01", size = 622112, upload-time = "2025-10-14T15:05:50.941Z" },
|
||||
]
|
||||
|
||||
[[package]]
name = "websockets"
version = "16.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/04/24/4b2031d72e840ce4c1ccb255f693b15c334757fc50023e4db9537080b8c4/websockets-16.0.tar.gz", hash = "sha256:5f6261a5e56e8d5c42a4497b364ea24d94d9563e8fbd44e78ac40879c60179b5", size = 179346, upload-time = "2026-01-10T09:23:47.181Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/84/7b/bac442e6b96c9d25092695578dda82403c77936104b5682307bd4deb1ad4/websockets-16.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:71c989cbf3254fbd5e84d3bff31e4da39c43f884e64f2551d14bb3c186230f00", size = 177365, upload-time = "2026-01-10T09:22:46.787Z" },
    { url = "https://files.pythonhosted.org/packages/b0/fe/136ccece61bd690d9c1f715baaeefd953bb2360134de73519d5df19d29ca/websockets-16.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:8b6e209ffee39ff1b6d0fa7bfef6de950c60dfb91b8fcead17da4ee539121a79", size = 175038, upload-time = "2026-01-10T09:22:47.999Z" },
    { url = "https://files.pythonhosted.org/packages/40/1e/9771421ac2286eaab95b8575b0cb701ae3663abf8b5e1f64f1fd90d0a673/websockets-16.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:86890e837d61574c92a97496d590968b23c2ef0aeb8a9bc9421d174cd378ae39", size = 175328, upload-time = "2026-01-10T09:22:49.809Z" },
    { url = "https://files.pythonhosted.org/packages/18/29/71729b4671f21e1eaa5d6573031ab810ad2936c8175f03f97f3ff164c802/websockets-16.0-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:9b5aca38b67492ef518a8ab76851862488a478602229112c4b0d58d63a7a4d5c", size = 184915, upload-time = "2026-01-10T09:22:51.071Z" },
    { url = "https://files.pythonhosted.org/packages/97/bb/21c36b7dbbafc85d2d480cd65df02a1dc93bf76d97147605a8e27ff9409d/websockets-16.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e0334872c0a37b606418ac52f6ab9cfd17317ac26365f7f65e203e2d0d0d359f", size = 186152, upload-time = "2026-01-10T09:22:52.224Z" },
    { url = "https://files.pythonhosted.org/packages/4a/34/9bf8df0c0cf88fa7bfe36678dc7b02970c9a7d5e065a3099292db87b1be2/websockets-16.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:a0b31e0b424cc6b5a04b8838bbaec1688834b2383256688cf47eb97412531da1", size = 185583, upload-time = "2026-01-10T09:22:53.443Z" },
    { url = "https://files.pythonhosted.org/packages/47/88/4dd516068e1a3d6ab3c7c183288404cd424a9a02d585efbac226cb61ff2d/websockets-16.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:485c49116d0af10ac698623c513c1cc01c9446c058a4e61e3bf6c19dff7335a2", size = 184880, upload-time = "2026-01-10T09:22:55.033Z" },
    { url = "https://files.pythonhosted.org/packages/91/d6/7d4553ad4bf1c0421e1ebd4b18de5d9098383b5caa1d937b63df8d04b565/websockets-16.0-cp312-cp312-win32.whl", hash = "sha256:eaded469f5e5b7294e2bdca0ab06becb6756ea86894a47806456089298813c89", size = 178261, upload-time = "2026-01-10T09:22:56.251Z" },
    { url = "https://files.pythonhosted.org/packages/c3/f0/f3a17365441ed1c27f850a80b2bc680a0fa9505d733fe152fdf5e98c1c0b/websockets-16.0-cp312-cp312-win_amd64.whl", hash = "sha256:5569417dc80977fc8c2d43a86f78e0a5a22fee17565d78621b6bb264a115d4ea", size = 178693, upload-time = "2026-01-10T09:22:57.478Z" },
    { url = "https://files.pythonhosted.org/packages/cc/9c/baa8456050d1c1b08dd0ec7346026668cbc6f145ab4e314d707bb845bf0d/websockets-16.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:878b336ac47938b474c8f982ac2f7266a540adc3fa4ad74ae96fea9823a02cc9", size = 177364, upload-time = "2026-01-10T09:22:59.333Z" },
    { url = "https://files.pythonhosted.org/packages/7e/0c/8811fc53e9bcff68fe7de2bcbe75116a8d959ac699a3200f4847a8925210/websockets-16.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:52a0fec0e6c8d9a784c2c78276a48a2bdf099e4ccc2a4cad53b27718dbfd0230", size = 175039, upload-time = "2026-01-10T09:23:01.171Z" },
    { url = "https://files.pythonhosted.org/packages/aa/82/39a5f910cb99ec0b59e482971238c845af9220d3ab9fa76dd9162cda9d62/websockets-16.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:e6578ed5b6981005df1860a56e3617f14a6c307e6a71b4fff8c48fdc50f3ed2c", size = 175323, upload-time = "2026-01-10T09:23:02.341Z" },
    { url = "https://files.pythonhosted.org/packages/bd/28/0a25ee5342eb5d5f297d992a77e56892ecb65e7854c7898fb7d35e9b33bd/websockets-16.0-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:95724e638f0f9c350bb1c2b0a7ad0e83d9cc0c9259f3ea94e40d7b02a2179ae5", size = 184975, upload-time = "2026-01-10T09:23:03.756Z" },
    { url = "https://files.pythonhosted.org/packages/f9/66/27ea52741752f5107c2e41fda05e8395a682a1e11c4e592a809a90c6a506/websockets-16.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c0204dc62a89dc9d50d682412c10b3542d748260d743500a85c13cd1ee4bde82", size = 186203, upload-time = "2026-01-10T09:23:05.01Z" },
    { url = "https://files.pythonhosted.org/packages/37/e5/8e32857371406a757816a2b471939d51c463509be73fa538216ea52b792a/websockets-16.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:52ac480f44d32970d66763115edea932f1c5b1312de36df06d6b219f6741eed8", size = 185653, upload-time = "2026-01-10T09:23:06.301Z" },
    { url = "https://files.pythonhosted.org/packages/9b/67/f926bac29882894669368dc73f4da900fcdf47955d0a0185d60103df5737/websockets-16.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:6e5a82b677f8f6f59e8dfc34ec06ca6b5b48bc4fcda346acd093694cc2c24d8f", size = 184920, upload-time = "2026-01-10T09:23:07.492Z" },
    { url = "https://files.pythonhosted.org/packages/3c/a1/3d6ccdcd125b0a42a311bcd15a7f705d688f73b2a22d8cf1c0875d35d34a/websockets-16.0-cp313-cp313-win32.whl", hash = "sha256:abf050a199613f64c886ea10f38b47770a65154dc37181bfaff70c160f45315a", size = 178255, upload-time = "2026-01-10T09:23:09.245Z" },
    { url = "https://files.pythonhosted.org/packages/6b/ae/90366304d7c2ce80f9b826096a9e9048b4bb760e44d3b873bb272cba696b/websockets-16.0-cp313-cp313-win_amd64.whl", hash = "sha256:3425ac5cf448801335d6fdc7ae1eb22072055417a96cc6b31b3861f455fbc156", size = 178689, upload-time = "2026-01-10T09:23:10.483Z" },
    { url = "https://files.pythonhosted.org/packages/f3/1d/e88022630271f5bd349ed82417136281931e558d628dd52c4d8621b4a0b2/websockets-16.0-cp314-cp314-macosx_10_15_universal2.whl", hash = "sha256:8cc451a50f2aee53042ac52d2d053d08bf89bcb31ae799cb4487587661c038a0", size = 177406, upload-time = "2026-01-10T09:23:12.178Z" },
    { url = "https://files.pythonhosted.org/packages/f2/78/e63be1bf0724eeb4616efb1ae1c9044f7c3953b7957799abb5915bffd38e/websockets-16.0-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:daa3b6ff70a9241cf6c7fc9e949d41232d9d7d26fd3522b1ad2b4d62487e9904", size = 175085, upload-time = "2026-01-10T09:23:13.511Z" },
    { url = "https://files.pythonhosted.org/packages/bb/f4/d3c9220d818ee955ae390cf319a7c7a467beceb24f05ee7aaaa2414345ba/websockets-16.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:fd3cb4adb94a2a6e2b7c0d8d05cb94e6f1c81a0cf9dc2694fb65c7e8d94c42e4", size = 175328, upload-time = "2026-01-10T09:23:14.727Z" },
    { url = "https://files.pythonhosted.org/packages/63/bc/d3e208028de777087e6fb2b122051a6ff7bbcca0d6df9d9c2bf1dd869ae9/websockets-16.0-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:781caf5e8eee67f663126490c2f96f40906594cb86b408a703630f95550a8c3e", size = 185044, upload-time = "2026-01-10T09:23:15.939Z" },
    { url = "https://files.pythonhosted.org/packages/ad/6e/9a0927ac24bd33a0a9af834d89e0abc7cfd8e13bed17a86407a66773cc0e/websockets-16.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:caab51a72c51973ca21fa8a18bd8165e1a0183f1ac7066a182ff27107b71e1a4", size = 186279, upload-time = "2026-01-10T09:23:17.148Z" },
    { url = "https://files.pythonhosted.org/packages/b9/ca/bf1c68440d7a868180e11be653c85959502efd3a709323230314fda6e0b3/websockets-16.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:19c4dc84098e523fd63711e563077d39e90ec6702aff4b5d9e344a60cb3c0cb1", size = 185711, upload-time = "2026-01-10T09:23:18.372Z" },
    { url = "https://files.pythonhosted.org/packages/c4/f8/fdc34643a989561f217bb477cbc47a3a07212cbda91c0e4389c43c296ebf/websockets-16.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:a5e18a238a2b2249c9a9235466b90e96ae4795672598a58772dd806edc7ac6d3", size = 184982, upload-time = "2026-01-10T09:23:19.652Z" },
    { url = "https://files.pythonhosted.org/packages/dd/d1/574fa27e233764dbac9c52730d63fcf2823b16f0856b3329fc6268d6ae4f/websockets-16.0-cp314-cp314-win32.whl", hash = "sha256:a069d734c4a043182729edd3e9f247c3b2a4035415a9172fd0f1b71658a320a8", size = 177915, upload-time = "2026-01-10T09:23:21.458Z" },
    { url = "https://files.pythonhosted.org/packages/8a/f1/ae6b937bf3126b5134ce1f482365fde31a357c784ac51852978768b5eff4/websockets-16.0-cp314-cp314-win_amd64.whl", hash = "sha256:c0ee0e63f23914732c6d7e0cce24915c48f3f1512ec1d079ed01fc629dab269d", size = 178381, upload-time = "2026-01-10T09:23:22.715Z" },
    { url = "https://files.pythonhosted.org/packages/06/9b/f791d1db48403e1f0a27577a6beb37afae94254a8c6f08be4a23e4930bc0/websockets-16.0-cp314-cp314t-macosx_10_15_universal2.whl", hash = "sha256:a35539cacc3febb22b8f4d4a99cc79b104226a756aa7400adc722e83b0d03244", size = 177737, upload-time = "2026-01-10T09:23:24.523Z" },
    { url = "https://files.pythonhosted.org/packages/bd/40/53ad02341fa33b3ce489023f635367a4ac98b73570102ad2cdd770dacc9a/websockets-16.0-cp314-cp314t-macosx_10_15_x86_64.whl", hash = "sha256:b784ca5de850f4ce93ec85d3269d24d4c82f22b7212023c974c401d4980ebc5e", size = 175268, upload-time = "2026-01-10T09:23:25.781Z" },
    { url = "https://files.pythonhosted.org/packages/74/9b/6158d4e459b984f949dcbbb0c5d270154c7618e11c01029b9bbd1bb4c4f9/websockets-16.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:569d01a4e7fba956c5ae4fc988f0d4e187900f5497ce46339c996dbf24f17641", size = 175486, upload-time = "2026-01-10T09:23:27.033Z" },
    { url = "https://files.pythonhosted.org/packages/e5/2d/7583b30208b639c8090206f95073646c2c9ffd66f44df967981a64f849ad/websockets-16.0-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:50f23cdd8343b984957e4077839841146f67a3d31ab0d00e6b824e74c5b2f6e8", size = 185331, upload-time = "2026-01-10T09:23:28.259Z" },
    { url = "https://files.pythonhosted.org/packages/45/b0/cce3784eb519b7b5ad680d14b9673a31ab8dcb7aad8b64d81709d2430aa8/websockets-16.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:152284a83a00c59b759697b7f9e9cddf4e3c7861dd0d964b472b70f78f89e80e", size = 186501, upload-time = "2026-01-10T09:23:29.449Z" },
    { url = "https://files.pythonhosted.org/packages/19/60/b8ebe4c7e89fb5f6cdf080623c9d92789a53636950f7abacfc33fe2b3135/websockets-16.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:bc59589ab64b0022385f429b94697348a6a234e8ce22544e3681b2e9331b5944", size = 186062, upload-time = "2026-01-10T09:23:31.368Z" },
    { url = "https://files.pythonhosted.org/packages/88/a8/a080593f89b0138b6cba1b28f8df5673b5506f72879322288b031337c0b8/websockets-16.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:32da954ffa2814258030e5a57bc73a3635463238e797c7375dc8091327434206", size = 185356, upload-time = "2026-01-10T09:23:32.627Z" },
    { url = "https://files.pythonhosted.org/packages/c2/b6/b9afed2afadddaf5ebb2afa801abf4b0868f42f8539bfe4b071b5266c9fe/websockets-16.0-cp314-cp314t-win32.whl", hash = "sha256:5a4b4cc550cb665dd8a47f868c8d04c8230f857363ad3c9caf7a0c3bf8c61ca6", size = 178085, upload-time = "2026-01-10T09:23:33.816Z" },
    { url = "https://files.pythonhosted.org/packages/9f/3e/28135a24e384493fa804216b79a6a6759a38cc4ff59118787b9fb693df93/websockets-16.0-cp314-cp314t-win_amd64.whl", hash = "sha256:b14dc141ed6d2dde437cddb216004bcac6a1df0935d79656387bd41632ba0bbd", size = 178531, upload-time = "2026-01-10T09:23:35.016Z" },
    { url = "https://files.pythonhosted.org/packages/6f/28/258ebab549c2bf3e64d2b0217b973467394a9cea8c42f70418ca2c5d0d2e/websockets-16.0-py3-none-any.whl", hash = "sha256:1637db62fad1dc833276dded54215f2c7fa46912301a24bd94d45d46a011ceec", size = 171598, upload-time = "2026-01-10T09:23:45.395Z" },
]
12
buildbot-ext/buildbot_autoscale_ext/__init__.py
Normal file

@@ -0,0 +1,12 @@
"""Buildbot autoscale extension package."""

from .configurator import AutoscaleConfigurator
from .settings import AutoscaleSettings
from .steps import CapacityGateStep, CapacityReleaseStep

__all__ = [
    "AutoscaleConfigurator",
    "AutoscaleSettings",
    "CapacityGateStep",
    "CapacityReleaseStep",
]
169
buildbot-ext/buildbot_autoscale_ext/client.py
Normal file

@@ -0,0 +1,169 @@
from __future__ import annotations

import http.client
import json
import random
import socket
import time
from dataclasses import dataclass
from typing import Any


@dataclass(frozen=True)
class RetryPolicy:
    max_attempts: int
    base_seconds: float
    max_seconds: float


class DaemonError(RuntimeError):
    def __init__(
        self,
        message: str,
        *,
        path: str,
        status: int | None = None,
        response: dict[str, Any] | None = None,
        cause: Exception | None = None,
    ) -> None:
        super().__init__(message)
        self.path = path
        self.status = status
        self.response = response
        self.cause = cause


class UnixSocketHTTPConnection(http.client.HTTPConnection):
    def __init__(self, socket_path: str, timeout: float) -> None:
        super().__init__(host="localhost", port=0, timeout=timeout)
        self._socket_path = socket_path

    def connect(self) -> None:
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.settimeout(self.timeout)
        self.sock.connect(self._socket_path)


class DaemonClient:
    def __init__(self, socket_path: str, retry_policy: RetryPolicy) -> None:
        self._socket_path = socket_path
        self._retry = retry_policy

    def post_json(
        self,
        path: str,
        body: dict[str, Any],
        timeout_seconds: float,
        retryable_statuses: set[int],
    ) -> dict[str, Any]:
        return self._request_json(
            method="POST",
            path=path,
            timeout_seconds=timeout_seconds,
            retryable_statuses=retryable_statuses,
            body=body,
        )

    def get_json(
        self,
        path: str,
        timeout_seconds: float,
        retryable_statuses: set[int],
    ) -> dict[str, Any]:
        return self._request_json(
            method="GET",
            path=path,
            timeout_seconds=timeout_seconds,
            retryable_statuses=retryable_statuses,
            body=None,
        )

    def _request_json(
        self,
        *,
        method: str,
        path: str,
        timeout_seconds: float,
        retryable_statuses: set[int],
        body: dict[str, Any] | None,
    ) -> dict[str, Any]:
        last_error: DaemonError | None = None
        for attempt in range(1, self._retry.max_attempts + 1):
            try:
                payload = json.dumps(body).encode("utf-8") if body is not None else None
                response_body, status = self._raw_request(
                    method=method,
                    path=path,
                    timeout_seconds=timeout_seconds,
                    payload=payload,
                )
                parsed = self._parse_json(response_body, path)

                if 200 <= status < 300:
                    return parsed

                err = DaemonError(
                    f"daemon returned HTTP {status} for {method} {path}",
                    path=path,
                    status=status,
                    response=parsed,
                )

                retryable = status in retryable_statuses
                if not retryable:
                    raise err
                last_error = err
            except (ConnectionRefusedError, FileNotFoundError, TimeoutError, OSError) as exc:
                last_error = DaemonError(
                    f"daemon transport error during {method} {path}: {exc}",
                    path=path,
                    cause=exc,
                )
            except DaemonError:
                raise

            if attempt < self._retry.max_attempts:
                self._sleep_backoff(attempt)

        assert last_error is not None
        raise last_error

    def _raw_request(
        self,
        *,
        method: str,
        path: str,
        timeout_seconds: float,
        payload: bytes | None,
    ) -> tuple[bytes, int]:
        conn = UnixSocketHTTPConnection(self._socket_path, timeout=timeout_seconds)
        headers = {"Accept": "application/json"}
        if payload is not None:
            headers["Content-Type"] = "application/json"
        try:
            conn.request(method=method, url=path, body=payload, headers=headers)
            response = conn.getresponse()
            data = response.read()
            return data, response.status
        finally:
            conn.close()

    @staticmethod
    def _parse_json(raw: bytes, path: str) -> dict[str, Any]:
        if not raw:
            return {}
        try:
            data = json.loads(raw.decode("utf-8"))
        except json.JSONDecodeError as exc:
            raise DaemonError(
                f"daemon returned invalid JSON for {path}",
                path=path,
                cause=exc,
            ) from exc
        if not isinstance(data, dict):
            raise DaemonError(f"daemon returned non-object JSON for {path}", path=path)
        return data

    def _sleep_backoff(self, attempt: int) -> None:
        ceiling = min(self._retry.max_seconds, self._retry.base_seconds * (2 ** (attempt - 1)))
        time.sleep(random.uniform(0.0, ceiling))
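The retry loop in `_request_json` sleeps between attempts using "full jitter" backoff: the ceiling doubles from `base_seconds` per attempt, is capped at `max_seconds`, and the actual sleep is drawn uniformly from `[0, ceiling]`. A standalone sketch of the ceiling schedule (the function below is an illustration, not part of the package):

```python
# Sketch of the backoff ceiling computed by DaemonClient._sleep_backoff:
# ceiling(attempt) = min(max_seconds, base_seconds * 2**(attempt - 1)).
# The real sleep is random.uniform(0.0, ceiling), i.e. full jitter.

def backoff_ceiling(attempt: int, base_seconds: float, max_seconds: float) -> float:
    return min(max_seconds, base_seconds * (2 ** (attempt - 1)))

# With the AutoscaleSettings defaults (base 0.5s, cap 5.0s), the ceilings
# across the 5 default attempts are:
ceilings = [backoff_ceiling(a, 0.5, 5.0) for a in range(1, 6)]
# → [0.5, 1.0, 2.0, 4.0, 5.0]  (attempt 5 would be 8.0s but is capped)
```

Jitter keeps retries from many concurrent build steps from hammering the daemon in lockstep after a shared outage.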
66
buildbot-ext/buildbot_autoscale_ext/configurator.py
Normal file

@@ -0,0 +1,66 @@
from __future__ import annotations

import logging
from typing import Any

from buildbot.configurators import ConfiguratorBase

from .settings import AutoscaleSettings
from .steps import CapacityGateStep, CapacityReleaseStep

log = logging.getLogger(__name__)


class AutoscaleConfigurator(ConfiguratorBase):
    def __init__(self, settings: AutoscaleSettings) -> None:
        super().__init__()
        self.settings = settings

    def configure(self, config_dict: dict[str, Any]) -> None:
        builders = config_dict.get("builders", [])
        patched: list[str] = []

        for builder in builders:
            name = getattr(builder, "name", "")
            if not isinstance(name, str) or not name.endswith("/nix-build"):
                continue

            factory = getattr(builder, "factory", None)
            steps = getattr(factory, "steps", None)
            if factory is None or not isinstance(steps, list):
                log.warning("Skipping builder with unrecognized factory shape: %s", name)
                continue

            gate = CapacityGateStep(
                name="Ensure remote builder capacity",
                daemon_socket=self.settings.daemon_socket,
                system_property=self.settings.system_property,
                default_system=self.settings.default_system,
                reserve_timeout_seconds=self.settings.reserve_timeout_seconds,
                poll_interval_seconds=self.settings.poll_interval_seconds,
                retry_max_attempts=self.settings.retry_max_attempts,
                retry_base_seconds=self.settings.retry_base_seconds,
                retry_max_seconds=self.settings.retry_max_seconds,
                haltOnFailure=True,
                flunkOnFailure=True,
                warnOnFailure=False,
            )
            steps.insert(0, gate)

            if self.settings.release_on_finish:
                steps.append(
                    CapacityReleaseStep(
                        name="Release autoscaler reservation",
                        daemon_socket=self.settings.daemon_socket,
                        retry_max_attempts=self.settings.retry_max_attempts,
                        retry_base_seconds=self.settings.retry_base_seconds,
                        retry_max_seconds=self.settings.retry_max_seconds,
                        alwaysRun=True,
                        flunkOnFailure=False,
                        warnOnFailure=True,
                    )
                )

            patched.append(name)

        log.info("AutoscaleConfigurator patched builders: %s", patched)
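A minimal sketch of wiring this configurator into a Buildbot `master.cfg`, using Buildbot's `c["configurators"]` hook (the socket path here is an assumption, not taken from this repo):

```python
# master.cfg fragment (sketch; /run/nix-autoscaler/daemon.sock is an assumed path)
from buildbot_autoscale_ext.configurator import AutoscaleConfigurator
from buildbot_autoscale_ext.settings import AutoscaleSettings

c = BuildmasterConfig = {}
# ... builders, workers, schedulers configured as usual ...
c["configurators"] = [
    AutoscaleConfigurator(
        AutoscaleSettings(daemon_socket="/run/nix-autoscaler/daemon.sock")
    )
]
```

Because `configure` only touches builders whose names end in `/nix-build`, the configurator can run against the full builder list without affecting evaluation or auxiliary builders.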
14
buildbot-ext/buildbot_autoscale_ext/settings.py
Normal file
@@ -0,0 +1,14 @@
from dataclasses import dataclass


@dataclass(frozen=True)
class AutoscaleSettings:
    daemon_socket: str
    system_property: str = "system"
    default_system: str = "x86_64-linux"
    reserve_timeout_seconds: int = 900
    poll_interval_seconds: float = 5.0
    retry_max_attempts: int = 5
    retry_base_seconds: float = 0.5
    retry_max_seconds: float = 5.0
    release_on_finish: bool = True
199
buildbot-ext/buildbot_autoscale_ext/steps.py
Normal file
@@ -0,0 +1,199 @@
from __future__ import annotations

import time
from typing import Any, cast

from buildbot.plugins import util
from buildbot.process import buildstep
from twisted.internet import defer, reactor
from twisted.internet.interfaces import IReactorTime
from twisted.internet.task import deferLater
from twisted.internet.threads import deferToThread

from .client import DaemonClient, DaemonError, RetryPolicy
from .telemetry import phase_message


class CapacityGateStep(buildstep.BuildStep):
    renderables = ()

    def __init__(
        self,
        *,
        daemon_socket: str,
        system_property: str,
        default_system: str,
        reserve_timeout_seconds: int,
        poll_interval_seconds: float,
        retry_max_attempts: int,
        retry_base_seconds: float,
        retry_max_seconds: float = 5.0,
        **kwargs: object,
    ) -> None:
        super().__init__(**kwargs)
        self._system_property = system_property
        self._default_system = default_system
        self._reserve_timeout_seconds = reserve_timeout_seconds
        self._poll_interval_seconds = poll_interval_seconds
        self._client = DaemonClient(
            socket_path=daemon_socket,
            retry_policy=RetryPolicy(
                max_attempts=retry_max_attempts,
                base_seconds=retry_base_seconds,
                max_seconds=retry_max_seconds,
            ),
        )

    def _determine_system(self) -> str:
        if self.build is None:
            return self._default_system
        props = self.build.getProperties()
        value = props.getProperty(self._system_property)
        if value:
            return str(value)
        return self._default_system

    def run(self) -> defer.Deferred[int]:
        return defer.ensureDeferred(self._run())

    async def _run(self) -> int:
        system = self._determine_system()
        start = time.monotonic()

        try:
            reserve = await deferToThread(
                self._client.post_json,
                "/v1/reservations",
                {
                    "system": system,
                    "reason": "buildbot-nix-build",
                    "build_id": getattr(self.build, "buildid", None),
                },
                10.0,
                {429, 500, 502, 503, 504},
            )
        except Exception as exc:  # noqa: BLE001
            await self._add_log("autoscale_gate_reserve_error", f"reserve failed: {exc}")
            self.descriptionDone = ["capacity reservation failed after retries"]
            return util.FAILURE

        reservation_id = str(reserve["reservation_id"])
        self._set_property("autoscale_reservation_id", reservation_id)

        last_phase: str | None = "pending"
        last_reason: str | None = None

        while True:
            elapsed = time.monotonic() - start
            self.descriptionSuffix = [f"phase={last_phase} elapsed={int(elapsed)}s"]
            if elapsed > self._reserve_timeout_seconds:
                await self._add_log(
                    "autoscale_gate_timeout",
                    f"capacity wait timeout {phase_message(last_phase, last_reason)}",
                )
                self.descriptionDone = ["capacity wait timeout"]
                return util.FAILURE

            try:
                status = await deferToThread(
                    self._client.get_json,
                    f"/v1/reservations/{reservation_id}",
                    10.0,
                    {429, 500, 502, 503, 504},
                )
            except DaemonError as exc:
                last_reason = str(exc)
                await deferLater(
                    cast(IReactorTime, reactor),
                    self._poll_interval_seconds,
                    lambda: None,
                )
                continue

            phase = str(status.get("phase", "pending"))
            reason = status.get("reason")
            last_phase = phase
            last_reason = str(reason) if reason is not None else None

            if phase == "ready":
                slot = str(status["slot"])
                instance_id = str(status["instance_id"])
                waited = int(time.monotonic() - start)
                self._set_property("autoscale_slot", slot)
                self._set_property("autoscale_instance_id", instance_id)
                self._set_property("autoscale_wait_seconds", waited)
                self.descriptionDone = [f"capacity ready in {waited}s"]
                self.descriptionSuffix = [f"phase={phase}"]
                return util.SUCCESS

            if phase in {"failed", "expired", "released"}:
                await self._add_log(
                    "autoscale_gate_failure",
                    f"capacity gate terminal {phase_message(last_phase, last_reason)}",
                )
                self.descriptionDone = [f"autoscaler reservation {phase}"]
                self.descriptionSuffix = [f"phase={phase}"]
                return util.FAILURE

            await deferLater(
                cast(IReactorTime, reactor),
                self._poll_interval_seconds,
                lambda: None,
            )

    def _set_property(self, name: str, value: object) -> None:
        if self.build is None:
            return
        self.build.setProperty(name, value, "autoscale")

    async def _add_log(self, name: str, message: str) -> None:
        log = cast(Any, await self.addLog(name))
        log.addStderr(f"{message}\n")


class CapacityReleaseStep(buildstep.BuildStep):
    def __init__(
        self,
        *,
        daemon_socket: str,
        retry_max_attempts: int,
        retry_base_seconds: float,
        retry_max_seconds: float = 5.0,
        **kwargs: object,
    ) -> None:
        super().__init__(**kwargs)
        self._client = DaemonClient(
            socket_path=daemon_socket,
            retry_policy=RetryPolicy(
                max_attempts=retry_max_attempts,
                base_seconds=retry_base_seconds,
                max_seconds=retry_max_seconds,
            ),
        )

    def run(self) -> defer.Deferred[int]:
        return defer.ensureDeferred(self._run())

    async def _run(self) -> int:
        if self.build is None:
            return util.SKIPPED

        reservation_id = self.build.getProperty("autoscale_reservation_id")
        if not reservation_id:
            return util.SKIPPED

        try:
            await deferToThread(
                self._client.post_json,
                f"/v1/reservations/{reservation_id}/release",
                {},
                10.0,
                {429, 500, 502, 503, 504},
            )
        except Exception as exc:  # noqa: BLE001
            log = cast(Any, await self.addLog("autoscale_release_error"))
            log.addStderr(f"release failed: {exc}\n")
            return util.WARNINGS

        self.descriptionDone = ["autoscaler reservation released"]
        return util.SUCCESS
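Stripped of Twisted plumbing, the gate's poll loop is a small state machine: reserve, then poll until the reservation is ready, terminal, or the timeout elapses. A standalone sketch under those assumptions (fake status sequence, no daemon, no retries):

```python
import time

# Phases the gate treats as terminal failures, per CapacityGateStep above.
TERMINAL = {"failed", "expired", "released"}


def wait_for_capacity(statuses, timeout_s: float, now=time.monotonic) -> str:
    """Consume an iterator of status dicts until ready, terminal, or timeout."""
    start = now()
    for status in statuses:
        if now() - start > timeout_s:
            return "timeout"
        phase = status.get("phase", "pending")
        if phase == "ready":
            return "ready"
        if phase in TERMINAL:
            return phase
        # "pending" (or unknown) falls through: keep polling.
    return "timeout"  # status stream exhausted without a decision


# Pending twice, then ready:
outcome = wait_for_capacity(
    iter([{"phase": "pending"}, {"phase": "pending"}, {"phase": "ready"}]),
    timeout_s=5.0,
)
print(outcome)  # ready
```

The real step layers retries, build properties, and `deferLater` sleeps on top of this skeleton, but the decision logic is the same.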
20
buildbot-ext/buildbot_autoscale_ext/telemetry.py
Normal file
@@ -0,0 +1,20 @@
from __future__ import annotations

import logging


def get_logger(name: str) -> logging.Logger:
    return logging.getLogger(name)


def event(logger: logging.Logger, level: int, name: str, **fields: object) -> None:
    logger.log(level, "%s %s", name, " ".join(f"{key}={value!r}" for key, value in fields.items()))


def phase_message(phase: str | None, reason: str | None) -> str:
    details: list[str] = []
    if phase:
        details.append(f"phase={phase}")
    if reason:
        details.append(f"reason={reason}")
    return " ".join(details) if details else "no daemon details"
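`phase_message` is a pure function, so its edge cases are easy to pin down. A copy of the same logic with sample inputs (the reason strings are illustrative):

```python
def phase_message(phase, reason):
    # Same logic as telemetry.phase_message above.
    details = []
    if phase:
        details.append(f"phase={phase}")
    if reason:
        details.append(f"reason={reason}")
    return " ".join(details) if details else "no daemon details"


print(phase_message("failed", "spot capacity"))  # phase=failed reason=spot capacity
print(phase_message("ready", None))              # phase=ready
print(phase_message(None, None))                 # no daemon details
```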
11
buildbot-ext/buildbot_autoscale_ext/tests/__init__.py
Normal file
@@ -0,0 +1,11 @@
"""Tests for buildbot_autoscale_ext."""
|
||||
|
||||
import asyncio
|
||||
from contextlib import suppress
|
||||
|
||||
from twisted.internet import asyncioreactor
|
||||
|
||||
TEST_LOOP = asyncio.new_event_loop()
|
||||
|
||||
with suppress(Exception):
|
||||
asyncioreactor.install(TEST_LOOP)
|
||||
212
buildbot-ext/buildbot_autoscale_ext/tests/test_client.py
Normal file
@@ -0,0 +1,212 @@
from __future__ import annotations

import json
import os
import socketserver
import tempfile
import threading
from collections.abc import Callable
from contextlib import suppress
from dataclasses import dataclass
from http import HTTPStatus
from http.server import BaseHTTPRequestHandler
from pathlib import Path
from typing import Any

import pytest

from buildbot_autoscale_ext.client import (
    DaemonClient,
    DaemonError,
    RetryPolicy,
    UnixSocketHTTPConnection,
)


@dataclass
class ServerState:
    post_count: int = 0
    get_count: int = 0


class _Handler(BaseHTTPRequestHandler):
    server: _UnixHTTPServer

    def do_GET(self) -> None:  # noqa: N802
        self.server.state.get_count += 1
        status, body = self.server.on_get(self.path, self.server.state.get_count)
        self._send(status, body)

    def do_POST(self) -> None:  # noqa: N802
        self.server.state.post_count += 1
        size = int(self.headers.get("Content-Length", "0"))
        raw = self.rfile.read(size) if size else b"{}"
        payload = json.loads(raw.decode("utf-8"))
        status, body = self.server.on_post(self.path, payload, self.server.state.post_count)
        self._send(status, body)

    def log_message(self, format: str, *args: object) -> None:
        del format, args

    def _send(self, status: int, body: dict[str, Any]) -> None:
        encoded = json.dumps(body).encode("utf-8")
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(encoded)))
        self.end_headers()
        self.wfile.write(encoded)


class _UnixHTTPServer(socketserver.UnixStreamServer):
    def __init__(
        self,
        socket_path: str,
        *,
        on_get: Callable[[str, int], tuple[int, dict[str, Any]]],
        on_post: Callable[[str, dict[str, Any], int], tuple[int, dict[str, Any]]],
    ) -> None:
        self.on_get = on_get
        self.on_post = on_post
        self.state = ServerState()
        super().__init__(socket_path, _Handler)


class FakeDaemon:
    def __init__(
        self,
        socket_path: str,
        *,
        on_get: Callable[[str, int], tuple[int, dict[str, Any]]],
        on_post: Callable[[str, dict[str, Any], int], tuple[int, dict[str, Any]]],
    ) -> None:
        self._socket_path = socket_path
        self._server = _UnixHTTPServer(socket_path, on_get=on_get, on_post=on_post)
        self._thread = threading.Thread(target=self._server.serve_forever, daemon=True)

    def __enter__(self) -> FakeDaemon:
        self._thread.start()
        return self

    def __exit__(self, exc_type: object, exc: object, tb: object) -> None:
        del exc_type, exc, tb
        self._server.shutdown()
        self._server.server_close()
        with suppress(FileNotFoundError):
            os.unlink(self._socket_path)


@pytest.fixture
def socket_path() -> str:
    with tempfile.TemporaryDirectory() as tmp:
        yield str(Path(tmp) / "daemon.sock")


def _client(socket_path: str, attempts: int = 3) -> DaemonClient:
    return DaemonClient(
        socket_path=socket_path,
        retry_policy=RetryPolicy(max_attempts=attempts, base_seconds=0.001, max_seconds=0.01),
    )


def test_post_json_success(socket_path: str) -> None:
    with FakeDaemon(
        socket_path,
        on_get=lambda _p, _a: (HTTPStatus.OK, {}),
        on_post=lambda _p, payload, _a: (HTTPStatus.OK, {"echo": payload["system"]}),
    ):
        response = _client(socket_path).post_json(
            "/v1/reservations",
            {"system": "x86_64-linux"},
            timeout_seconds=1.0,
            retryable_statuses={429, 500, 503},
        )

    assert response == {"echo": "x86_64-linux"}


def test_get_json_success(socket_path: str) -> None:
    with FakeDaemon(
        socket_path,
        on_get=lambda _p, _a: (HTTPStatus.OK, {"phase": "ready"}),
        on_post=lambda _p, _payload, _a: (HTTPStatus.OK, {}),
    ):
        response = _client(socket_path).get_json(
            "/v1/reservations/r1",
            timeout_seconds=1.0,
            retryable_statuses={429, 500, 503},
        )

    assert response == {"phase": "ready"}


def test_transient_503_retries_then_raises(socket_path: str) -> None:
    with (
        FakeDaemon(
            socket_path,
            on_get=lambda _p, _a: (HTTPStatus.SERVICE_UNAVAILABLE, {"error": "busy"}),
            on_post=lambda _p, _payload, _a: (HTTPStatus.OK, {}),
        ) as daemon,
        pytest.raises(DaemonError) as exc,
    ):
        _client(socket_path, attempts=3).get_json(
            "/v1/reservations/r1",
            timeout_seconds=1.0,
            retryable_statuses={429, 500, 502, 503, 504},
        )

    assert exc.value.status == HTTPStatus.SERVICE_UNAVAILABLE
    assert daemon._server.state.get_count == 3


def test_400_not_retried(socket_path: str) -> None:
    with (
        FakeDaemon(
            socket_path,
            on_get=lambda _p, _a: (HTTPStatus.BAD_REQUEST, {"error": "bad"}),
            on_post=lambda _p, _payload, _a: (HTTPStatus.OK, {}),
        ) as daemon,
        pytest.raises(DaemonError) as exc,
    ):
        _client(socket_path, attempts=5).get_json(
            "/v1/reservations/r1",
            timeout_seconds=1.0,
            retryable_statuses={429, 500, 502, 503, 504},
        )

    assert exc.value.status == HTTPStatus.BAD_REQUEST
    assert daemon._server.state.get_count == 1


def test_connection_refused_retries_then_raises(
    socket_path: str,
    monkeypatch: pytest.MonkeyPatch,
) -> None:
    def _boom(self: UnixSocketHTTPConnection) -> None:
        raise ConnectionRefusedError("refused")

    monkeypatch.setattr(UnixSocketHTTPConnection, "connect", _boom)

    with pytest.raises(DaemonError):
        _client(socket_path, attempts=3).get_json(
            "/v1/reservations/r1",
            timeout_seconds=1.0,
            retryable_statuses={429, 500, 502, 503, 504},
        )


def test_backoff_attempts_at_least_two(socket_path: str) -> None:
    with (
        FakeDaemon(
            socket_path,
            on_get=lambda _p, _a: (HTTPStatus.SERVICE_UNAVAILABLE, {"error": "busy"}),
            on_post=lambda _p, _payload, _a: (HTTPStatus.OK, {}),
        ) as daemon,
        pytest.raises(DaemonError),
    ):
        _client(socket_path, attempts=2).get_json(
            "/v1/reservations/r1",
            timeout_seconds=1.0,
            retryable_statuses={429, 500, 502, 503, 504},
        )

    assert daemon._server.state.get_count >= 2
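The FakeDaemon above serves HTTP over an AF_UNIX socket, which is also what `UnixSocketHTTPConnection` consumes. The whole pattern fits in one runnable sketch: subclass `http.client.HTTPConnection`, override only `connect()` to dial the Unix socket, and pair it with `socketserver.UnixStreamServer` (Unix-only; the handler body here is illustrative):

```python
import http.client
import json
import socket
import socketserver
import tempfile
import threading
from http.server import BaseHTTPRequestHandler
from pathlib import Path


class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client over AF_UNIX: only connect() needs overriding."""

    def __init__(self, socket_path: str) -> None:
        super().__init__("localhost")  # host is unused; required by the base class
        self._socket_path = socket_path

    def connect(self) -> None:
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._socket_path)
        self.sock = sock


class _Handler(BaseHTTPRequestHandler):
    def do_GET(self) -> None:  # noqa: N802
        body = json.dumps({"phase": "ready"}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args: object) -> None:
        pass  # keep test output quiet


with tempfile.TemporaryDirectory() as tmp:
    path = str(Path(tmp) / "daemon.sock")
    server = socketserver.UnixStreamServer(path, _Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    conn = UnixHTTPConnection(path)
    conn.request("GET", "/v1/reservations/r1")
    payload = json.loads(conn.getresponse().read())
    print(payload)  # {'phase': 'ready'}
    conn.close()
    server.shutdown()
    server.server_close()
```

Nothing on the wire changes versus TCP; only the transport socket does, which is why the client's retry and JSON handling need no socket-specific branches.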
@@ -0,0 +1,59 @@
from __future__ import annotations

import logging
from dataclasses import dataclass, field

import pytest

from buildbot_autoscale_ext.configurator import AutoscaleConfigurator
from buildbot_autoscale_ext.settings import AutoscaleSettings
from buildbot_autoscale_ext.steps import CapacityGateStep, CapacityReleaseStep


@dataclass
class FakeFactory:
    steps: list[object] = field(default_factory=list)


@dataclass
class FakeBuilder:
    name: str
    factory: FakeFactory


def test_patches_nix_builders() -> None:
    cfg = {
        "builders": [
            FakeBuilder("proj/nix-build", FakeFactory(["original"])),
            FakeBuilder("proj/eval", FakeFactory(["eval"])),
        ]
    }

    AutoscaleConfigurator(AutoscaleSettings(daemon_socket="/tmp/daemon.sock")).configure(cfg)

    patched_steps = cfg["builders"][0].factory.steps
    assert isinstance(patched_steps[0], CapacityGateStep)
    assert patched_steps[1] == "original"
    assert isinstance(patched_steps[-1], CapacityReleaseStep)

    untouched_steps = cfg["builders"][1].factory.steps
    assert untouched_steps == ["eval"]


def test_empty_builders_no_error() -> None:
    cfg = {"builders": []}
    AutoscaleConfigurator(AutoscaleSettings(daemon_socket="/tmp/daemon.sock")).configure(cfg)


def test_startup_log_contains_patched_names(caplog: pytest.LogCaptureFixture) -> None:
    caplog.set_level(logging.INFO)
    cfg = {
        "builders": [
            FakeBuilder("one/nix-build", FakeFactory()),
            FakeBuilder("two/eval", FakeFactory()),
        ]
    }

    AutoscaleConfigurator(AutoscaleSettings(daemon_socket="/tmp/daemon.sock")).configure(cfg)

    assert "one/nix-build" in caplog.text
337
buildbot-ext/buildbot_autoscale_ext/tests/test_steps.py
Normal file
@@ -0,0 +1,337 @@
from __future__ import annotations

import json
import os
import socketserver
import tempfile
import threading
from collections.abc import Callable
from contextlib import suppress
from http import HTTPStatus
from http.server import BaseHTTPRequestHandler
from pathlib import Path
from typing import Any

import pytest
from buildbot.plugins import util
from twisted.internet import defer

from buildbot_autoscale_ext.steps import CapacityGateStep, CapacityReleaseStep


class FakeProperties:
    def __init__(self, data: dict[str, Any]) -> None:
        self._data = data

    def getProperty(self, name: str) -> Any:
        return self._data.get(name)


class FakeBuild:
    def __init__(self) -> None:
        self.buildid = 42
        self._props: dict[str, Any] = {}

    def getProperties(self) -> FakeProperties:
        return FakeProperties(self._props)

    def setProperty(self, key: str, value: Any, _source: str) -> None:
        self._props[key] = value

    def getProperty(self, key: str) -> Any:
        return self._props.get(key)


class FakeLog:
    def __init__(self) -> None:
        self.stderr: list[str] = []

    def addStderr(self, text: str) -> None:
        self.stderr.append(text)


class _Handler(BaseHTTPRequestHandler):
    server: _UnixHTTPServer

    def do_GET(self) -> None:  # noqa: N802
        self.server.get_count += 1
        status, body = self.server.on_get(self.path, self.server.get_count)
        self._send(status, body)

    def do_POST(self) -> None:  # noqa: N802
        self.server.post_count += 1
        size = int(self.headers.get("Content-Length", "0"))
        raw = self.rfile.read(size) if size else b"{}"
        payload = json.loads(raw.decode("utf-8"))
        status, body = self.server.on_post(self.path, payload, self.server.post_count)
        self._send(status, body)

    def _send(self, status: int, body: dict[str, Any]) -> None:
        data = json.dumps(body).encode("utf-8")
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, format: str, *args: object) -> None:
        del format, args


class _UnixHTTPServer(socketserver.UnixStreamServer):
    def __init__(
        self,
        socket_path: str,
        *,
        on_get: Callable[[str, int], tuple[int, dict[str, Any]]],
        on_post: Callable[[str, dict[str, Any], int], tuple[int, dict[str, Any]]],
    ) -> None:
        self.on_get = on_get
        self.on_post = on_post
        self.get_count = 0
        self.post_count = 0
        super().__init__(socket_path, _Handler)


class FakeDaemon:
    def __init__(
        self,
        socket_path: str,
        *,
        on_get: Callable[[str, int], tuple[int, dict[str, Any]]],
        on_post: Callable[[str, dict[str, Any], int], tuple[int, dict[str, Any]]],
    ) -> None:
        self._socket_path = socket_path
        self._server = _UnixHTTPServer(socket_path, on_get=on_get, on_post=on_post)
        self._thread = threading.Thread(target=self._server.serve_forever, daemon=True)

    def __enter__(self) -> FakeDaemon:
        self._thread.start()
        return self

    def __exit__(self, exc_type: object, exc: object, tb: object) -> None:
        del exc_type, exc, tb
        self._server.shutdown()
        self._server.server_close()
        with suppress(FileNotFoundError):
            os.unlink(self._socket_path)


@pytest.fixture
def socket_path() -> str:
    with tempfile.TemporaryDirectory() as tmp:
        yield str(Path(tmp) / "daemon.sock")


def _attach_build(step: Any, build: FakeBuild) -> None:
    object.__setattr__(build, "master", None)
    object.__setattr__(step, "build", build)
    object.__setattr__(step, "master", None)

    async def _add_log(_name: str) -> FakeLog:
        return FakeLog()

    object.__setattr__(step, "addLog", _add_log)


@pytest.fixture(autouse=True)
def _patch_twisted_waits(monkeypatch: pytest.MonkeyPatch) -> None:
    from buildbot.config import errors as config_errors
    from buildbot.process import buildstep as buildstep_module

    import buildbot_autoscale_ext.steps as steps_mod

    def _defer_to_thread(func: Any, *args: object, **kwargs: object) -> defer.Deferred[object]:
        try:
            return defer.succeed(func(*args, **kwargs))
        except Exception as exc:  # noqa: BLE001
            return defer.fail(exc)

    def _defer_later(
        _clock: object,
        _seconds: float,
        callback: Any,
        *args: object,
        **kwargs: object,
    ) -> defer.Deferred[object]:
        return defer.succeed(callback(*args, **kwargs))

    monkeypatch.setattr(steps_mod, "deferToThread", _defer_to_thread)
    monkeypatch.setattr(steps_mod, "deferLater", _defer_later)
    monkeypatch.setattr(config_errors, "_errors", None, raising=False)
    monkeypatch.setattr(buildstep_module.config, "error", lambda *_a, **_k: None)


def _run_step(step: Any) -> int:
    deferred = step.run()
    out: list[int] = []
    failures: list[defer.Failure] = []
    deferred.addCallbacks(lambda value: out.append(value), lambda err: failures.append(err))
    if failures:
        failures[0].raiseException()
    return out[0]


def test_gate_success_pending_then_ready(socket_path: str) -> None:
    with FakeDaemon(
        socket_path,
        on_post=lambda _p, _payload, _n: (
            HTTPStatus.OK,
            {
                "reservation_id": "r1",
                "phase": "pending",
                "created_at": "now",
                "expires_at": "soon",
            },
        ),
        on_get=lambda _p, n: (
            HTTPStatus.OK,
            {"reservation_id": "r1", "phase": "pending"}
            if n == 1
            else {
                "reservation_id": "r1",
                "phase": "ready",
                "slot": "slot-1",
                "instance_id": "i-123",
                "system": "x86_64-linux",
                "updated_at": "now",
            },
        ),
    ):
        step = CapacityGateStep(
            name="gate",
            daemon_socket=socket_path,
            system_property="system",
            default_system="x86_64-linux",
            reserve_timeout_seconds=2,
            poll_interval_seconds=0.01,
            retry_max_attempts=2,
            retry_base_seconds=0.001,
        )
        build = FakeBuild()
        _attach_build(step, build)

        result = _run_step(step)

    assert result == util.SUCCESS
    assert build.getProperty("autoscale_reservation_id") == "r1"
    assert build.getProperty("autoscale_slot") == "slot-1"
    assert build.getProperty("autoscale_instance_id") == "i-123"
    assert isinstance(build.getProperty("autoscale_wait_seconds"), int)


def test_gate_failure_on_failed_phase(socket_path: str) -> None:
    with FakeDaemon(
        socket_path,
        on_post=lambda _p, _payload, _n: (
            HTTPStatus.OK,
            {"reservation_id": "r1", "phase": "pending"},
        ),
        on_get=lambda _p, _n: (
            HTTPStatus.OK,
            {"reservation_id": "r1", "phase": "failed", "reason": "no"},
        ),
    ):
        step = CapacityGateStep(
            name="gate",
            daemon_socket=socket_path,
            system_property="system",
            default_system="x86_64-linux",
            reserve_timeout_seconds=2,
            poll_interval_seconds=0.01,
            retry_max_attempts=2,
            retry_base_seconds=0.001,
        )
        build = FakeBuild()
        _attach_build(step, build)

        result = _run_step(step)

    assert result == util.FAILURE


def test_gate_failure_on_timeout(socket_path: str) -> None:
    with FakeDaemon(
        socket_path,
        on_post=lambda _p, _payload, _n: (
            HTTPStatus.OK,
            {"reservation_id": "r1", "phase": "pending"},
        ),
        on_get=lambda _p, _n: (HTTPStatus.OK, {"reservation_id": "r1", "phase": "pending"}),
    ):
        step = CapacityGateStep(
            name="gate",
            daemon_socket=socket_path,
            system_property="system",
            default_system="x86_64-linux",
            reserve_timeout_seconds=0,
            poll_interval_seconds=0.01,
            retry_max_attempts=2,
            retry_base_seconds=0.001,
        )
        build = FakeBuild()
        _attach_build(step, build)

        result = _run_step(step)

    assert result == util.FAILURE


def test_release_skipped_without_reservation(socket_path: str) -> None:
    step = CapacityReleaseStep(
        name="release",
        daemon_socket=socket_path,
        retry_max_attempts=2,
        retry_base_seconds=0.001,
    )
    build = FakeBuild()
    _attach_build(step, build)

    result = _run_step(step)

    assert result == util.SKIPPED


def test_release_warnings_on_retry_exhausted_500(socket_path: str) -> None:
    with FakeDaemon(
        socket_path,
        on_post=lambda _p, _payload, _n: (HTTPStatus.INTERNAL_SERVER_ERROR, {"error": "boom"}),
        on_get=lambda _p, _n: (HTTPStatus.OK, {}),
    ):
        step = CapacityReleaseStep(
            name="release",
            daemon_socket=socket_path,
            retry_max_attempts=2,
            retry_base_seconds=0.001,
        )
        build = FakeBuild()
        build.setProperty("autoscale_reservation_id", "r1", "test")
        _attach_build(step, build)

        result = _run_step(step)

    assert result == util.WARNINGS


def test_release_success(socket_path: str) -> None:
    with FakeDaemon(
        socket_path,
        on_post=lambda _p, _payload, _n: (
            HTTPStatus.OK,
            {"reservation_id": "r1", "phase": "released"},
        ),
        on_get=lambda _p, _n: (HTTPStatus.OK, {}),
    ):
        step = CapacityReleaseStep(
            name="release",
            daemon_socket=socket_path,
            retry_max_attempts=2,
            retry_base_seconds=0.001,
        )
        build = FakeBuild()
        build.setProperty("autoscale_reservation_id", "r1", "test")
        _attach_build(step, build)

        result = _run_step(step)

    assert result == util.SUCCESS
43
buildbot-ext/pyproject.toml
Normal file
@@ -0,0 +1,43 @@
[build-system]
requires = ["setuptools>=68", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name = "buildbot-autoscale-ext"
version = "0.1.0"
description = "Buildbot extension for nix-builder-autoscaler capacity gating"
requires-python = ">=3.12"
dependencies = [
    "buildbot",
    "twisted",
]

[dependency-groups]
dev = [
    "pytest",
    "ruff",
    "pyright",
]

[tool.uv.extra-build-dependencies]
py-ubjson = ["setuptools"]

[tool.ruff]
target-version = "py312"
line-length = 100

[tool.ruff.lint]
select = ["E", "F", "I", "UP", "B", "SIM", "ANN"]
ignore = []

[tool.ruff.lint.per-file-ignores]
"*/tests/*" = ["ANN"]

[tool.pyright]
pythonVersion = "3.12"
typeCheckingMode = "standard"
include = ["buildbot_autoscale_ext"]
exclude = ["**/tests"]

[tool.pytest.ini_options]
testpaths = ["buildbot_autoscale_ext/tests"]
1052
buildbot-ext/uv.lock
generated
Normal file
File diff suppressed because it is too large
95
flake.lock
generated
@@ -14,9 +14,102 @@
"url": "https://flakehub.com/f/NixOS/nixpkgs/0.1"
|
||||
}
|
||||
},
|
||||
"pyproject-build-systems": {
|
||||
"inputs": {
|
||||
"nixpkgs": [
|
||||
"nixpkgs"
|
||||
],
|
||||
"pyproject-nix": [
|
||||
"pyproject-nix"
|
||||
],
|
||||
"uv2nix": [
|
||||
"uv2nix"
|
||||
]
|
||||
},
|
||||
"locked": {
|
||||
"lastModified": 1771423342,
|
||||
"narHash": "sha256-7uXPiWB0YQ4HNaAqRvVndYL34FEp1ZTwVQHgZmyMtC8=",
|
||||
"owner": "pyproject-nix",
|
||||
"repo": "build-system-pkgs",
|
||||
"rev": "04e9c186e01f0830dad3739088070e4c551191a4",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
"owner": "pyproject-nix",
|
||||
"repo": "build-system-pkgs",
|
||||
"type": "github"
|
||||
}
|
||||
},
|
||||
"pyproject-nix": {
|
||||
"inputs": {
|
||||
"nixpkgs": [
|
||||
"nixpkgs"
|
||||
]
|
||||
},
|
||||
"locked": {
|
||||
"lastModified": 1771518446,
|
||||
"narHash": "sha256-nFJSfD89vWTu92KyuJWDoTQJuoDuddkJV3TlOl1cOic=",
|
||||
"owner": "pyproject-nix",
|
||||
"repo": "pyproject.nix",
|
||||
"rev": "eb204c6b3335698dec6c7fc1da0ebc3c6df05937",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
"owner": "pyproject-nix",
|
||||
"repo": "pyproject.nix",
|
||||
"type": "github"
|
||||
}
|
||||
},
|
||||
"root": {
|
||||
"inputs": {
|
||||
"nixpkgs": "nixpkgs"
|
||||
"nixpkgs": "nixpkgs",
|
||||
"pyproject-build-systems": "pyproject-build-systems",
|
||||
"pyproject-nix": "pyproject-nix",
|
||||
"treefmt-nix": "treefmt-nix",
|
||||
"uv2nix": "uv2nix"
|
||||
}
|
||||
},
|
||||
"treefmt-nix": {
|
||||
"inputs": {
|
||||
"nixpkgs": [
|
||||
"nixpkgs"
|
||||
]
|
||||
},
|
||||
"locked": {
|
||||
"lastModified": 1770228511,
|
||||
"narHash": "sha256-wQ6NJSuFqAEmIg2VMnLdCnUc0b7vslUohqqGGD+Fyxk=",
|
||||
"owner": "numtide",
|
||||
"repo": "treefmt-nix",
|
||||
"rev": "337a4fe074be1042a35086f15481d763b8ddc0e7",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
"owner": "numtide",
|
||||
"repo": "treefmt-nix",
|
||||
"type": "github"
|
||||
}
|
||||
},
|
||||
"uv2nix": {
|
||||
"inputs": {
|
||||
"nixpkgs": [
|
||||
"nixpkgs"
|
||||
],
|
||||
"pyproject-nix": [
|
||||
"pyproject-nix"
|
||||
]
|
||||
},
|
||||
"locked": {
|
||||
"lastModified": 1771808991,
|
||||
"narHash": "sha256-boRfTlN1GfVupWPnhcKlSHJzs9/lJP9KltycPLoPRbA=",
|
||||
"owner": "pyproject-nix",
|
||||
"repo": "uv2nix",
|
||||
"rev": "44d9a110d65fc4caaf9349fa819e8daf9d90d074",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
"owner": "pyproject-nix",
|
||||
"repo": "uv2nix",
|
||||
"type": "github"
|
||||
}
|
||||
}
|
||||
},
|
||||
|
|
|
|||
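For reference, the list-valued entries under a node's `inputs` above (e.g. `"nixpkgs": [ "nixpkgs" ]`) are `follows` paths walked from the root node, not direct node names. A rough illustration of that resolution rule, using a hand-abridged fragment of the lock (illustration only, not the Nix implementation):

```python
import json

# Hand-abridged fragment of the flake.lock above; illustration only.
lock = json.loads("""
{
  "nodes": {
    "nixpkgs": { "locked": { "type": "github" } },
    "pyproject-nix": { "inputs": { "nixpkgs": [ "nixpkgs" ] } },
    "uv2nix": { "inputs": { "nixpkgs": [ "nixpkgs" ], "pyproject-nix": [ "pyproject-nix" ] } },
    "root": { "inputs": { "nixpkgs": "nixpkgs", "pyproject-nix": "pyproject-nix", "uv2nix": "uv2nix" } }
  },
  "root": "root"
}
""")

def resolve(ref):
    """A string names a lock node directly; a list is an input path walked from root."""
    if isinstance(ref, str):
        return ref
    name = lock["root"]
    for step in ref:
        name = resolve(lock["nodes"][name]["inputs"][step])
    return name

# uv2nix's "nixpkgs" follows ["nixpkgs"] -> the same node the root's "nixpkgs" input points at
print(resolve(lock["nodes"]["uv2nix"]["inputs"]["nixpkgs"]))  # nixpkgs
```

This is why `inputs.nixpkgs.follows = "nixpkgs"` in the flake below collapses every input onto a single nixpkgs node instead of locking one copy per transitive input.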
flake.nix | 321

@@ -1,51 +1,304 @@
 {
-  inputs.nixpkgs.url = "https://flakehub.com/f/NixOS/nixpkgs/0.1";
+  description = "nix-builder-autoscaler - autoscaler daemon for Nix remote builders on EC2 Spot";
+
+  inputs = {
+    nixpkgs.url = "https://flakehub.com/f/NixOS/nixpkgs/0.1";
+    treefmt-nix = {
+      url = "github:numtide/treefmt-nix";
+      inputs.nixpkgs.follows = "nixpkgs";
+    };
+    pyproject-nix = {
+      url = "github:pyproject-nix/pyproject.nix";
+      inputs.nixpkgs.follows = "nixpkgs";
+    };
+    uv2nix = {
+      url = "github:pyproject-nix/uv2nix";
+      inputs.pyproject-nix.follows = "pyproject-nix";
+      inputs.nixpkgs.follows = "nixpkgs";
+    };
+    pyproject-build-systems = {
+      url = "github:pyproject-nix/build-system-pkgs";
+      inputs.pyproject-nix.follows = "pyproject-nix";
+      inputs.uv2nix.follows = "uv2nix";
+      inputs.nixpkgs.follows = "nixpkgs";
+    };
+  };

   outputs =
-    { self, nixpkgs }:
+    {
+      self,
+      nixpkgs,
+      treefmt-nix,
+      pyproject-nix,
+      uv2nix,
+      pyproject-build-systems,
+      ...
+    }:
     let
-      systems = [
-        "x86_64-linux"
-      ];
+      systems = [ "x86_64-linux" ];
       forAllSystems = fn: nixpkgs.lib.genAttrs systems (system: fn nixpkgs.legacyPackages.${system});
+
+      agentWorkspace = uv2nix.lib.workspace.loadWorkspace { workspaceRoot = ./agent; };
+      agentOverlay = agentWorkspace.mkPyprojectOverlay { sourcePreference = "wheel"; };
+
+      buildbotExtWorkspace = uv2nix.lib.workspace.loadWorkspace { workspaceRoot = ./buildbot-ext; };
+      buildbotExtOverlay = buildbotExtWorkspace.mkPyprojectOverlay { sourcePreference = "wheel"; };
+      pyprojectOverrides = final: prev: {
+        py-ubjson = prev.py-ubjson.overrideAttrs (old: {
+          nativeBuildInputs = (old.nativeBuildInputs or [ ]) ++ [ final.setuptools ];
+        });
+      };
     in
     {
-      #packages = forAllSystems (pkgs: {
-      #  default = pkgs.callPackage ./package.nix { };
-      #});
+      formatter = forAllSystems (
+        pkgs: (treefmt-nix.lib.evalModule pkgs ./treefmt.nix).config.build.wrapper
+      );
+
-      checks = forAllSystems (pkgs: {
-        # todo add tests
-        devShell = self.devShells.${pkgs.stdenv.hostPlatform.system}.default;
-      }
-      # future nixos test
-      # // pkgs.lib.optionalAttrs pkgs.stdenv.isLinux {
-      #   nixos-module = pkgs.testers.runNixOSTest (import ./nixos-test.nix self);
-      # }
+      packages = forAllSystems (
+        pkgs:
+        let
+          agentPythonSet =
+            (pkgs.callPackage pyproject-nix.build.packages {
+              python = pkgs.python312;
+            }).overrideScope
+              (
+                pkgs.lib.composeManyExtensions [
+                  pyproject-build-systems.overlays.default
+                  agentOverlay
+                  pyprojectOverrides
+                ]
+              );
+          buildbotExtPythonSet =
+            (pkgs.callPackage pyproject-nix.build.packages {
+              python = pkgs.python312;
+            }).overrideScope
+              (
+                pkgs.lib.composeManyExtensions [
+                  pyproject-build-systems.overlays.default
+                  buildbotExtOverlay
+                  pyprojectOverrides
+                ]
+              );
+          venv = agentPythonSet.mkVirtualEnv "nix-builder-autoscaler-env" agentWorkspace.deps.default;
+          buildbotExtVenv = buildbotExtPythonSet.mkVirtualEnv "buildbot-autoscale-ext-env" buildbotExtWorkspace.deps.default;
+        in
+        {
+          nix-builder-autoscaler = venv;
+          buildbot-autoscale-ext = buildbotExtVenv;
+          default = venv;
+        }
+      );
+
+      apps = forAllSystems (
+        pkgs:
+        let
+          venv = self.packages.${pkgs.stdenv.hostPlatform.system}.nix-builder-autoscaler;
+        in
+        {
+          nix-builder-autoscaler = {
+            type = "app";
+            program = "${venv}/bin/python";
+            meta.description = "Nix builder autoscaler daemon";
+          };
+          autoscalerctl = {
+            type = "app";
+            program = "${venv}/bin/autoscalerctl";
+            meta.description = "Autoscaler CLI";
+          };
+          default = {
+            type = "app";
+            program = "${venv}/bin/autoscalerctl";
+            meta.description = "Autoscaler CLI";
+          };
+        }
+      );
+
+      checks = forAllSystems (
+        pkgs:
+        let
+          agentPythonSet =
+            (pkgs.callPackage pyproject-nix.build.packages {
+              python = pkgs.python312;
+            }).overrideScope
+              (
+                pkgs.lib.composeManyExtensions [
+                  pyproject-build-systems.overlays.default
+                  agentOverlay
+                  pyprojectOverrides
+                ]
+              );
+          buildbotExtPythonSet =
+            (pkgs.callPackage pyproject-nix.build.packages {
+              python = pkgs.python312;
+            }).overrideScope
+              (
+                pkgs.lib.composeManyExtensions [
+                  pyproject-build-systems.overlays.default
+                  buildbotExtOverlay
+                  pyprojectOverrides
+                ]
+              );
+          testVenv = agentPythonSet.mkVirtualEnv "nix-builder-autoscaler-test-env" {
+            nix-builder-autoscaler = [ "dev" ];
+          };
+          buildbotExtTestVenv = buildbotExtPythonSet.mkVirtualEnv "buildbot-autoscale-ext-test-env" {
+            buildbot-autoscale-ext = [ "dev" ];
+          };
+          src = ./agent;
+          buildbotExtSrc = ./buildbot-ext;
+        in
+        {
+          devShell = self.devShells.${pkgs.stdenv.hostPlatform.system}.default;
+
+          nix-builder-autoscaler-unit-tests = pkgs.stdenv.mkDerivation {
+            name = "nix-builder-autoscaler-unit-tests";
+            inherit src;
+            dontConfigure = true;
+            dontBuild = true;
+            nativeBuildInputs = [ testVenv ];
+            checkPhase = ''
+              runHook preCheck
+              export HOME=$(mktemp -d)
+              pytest nix_builder_autoscaler/tests/ --ignore=nix_builder_autoscaler/tests/integration/ -v
+              runHook postCheck
+            '';
+            doCheck = true;
+            installPhase = ''
+              mkdir -p $out
+              touch $out/passed
+            '';
+          };
+
+          nix-builder-autoscaler-integration-tests = pkgs.stdenv.mkDerivation {
+            name = "nix-builder-autoscaler-integration-tests";
+            inherit src;
+            dontConfigure = true;
+            dontBuild = true;
+            nativeBuildInputs = [ testVenv ];
+            checkPhase = ''
+              runHook preCheck
+              export HOME=$(mktemp -d)
+              pytest nix_builder_autoscaler/tests/integration/ -v
+              runHook postCheck
+            '';
+            doCheck = true;
+            installPhase = ''
+              mkdir -p $out
+              touch $out/passed
+            '';
+          };
+
+          nix-builder-autoscaler-ruff = pkgs.stdenv.mkDerivation {
+            name = "nix-builder-autoscaler-ruff";
+            inherit src;
+            dontConfigure = true;
+            dontBuild = true;
+            nativeBuildInputs = [ testVenv ];
+            checkPhase = ''
+              runHook preCheck
+              ruff check nix_builder_autoscaler/
+              ruff format --check nix_builder_autoscaler/
+              runHook postCheck
+            '';
+            doCheck = true;
+            installPhase = ''
+              mkdir -p $out
+              touch $out/passed
+            '';
+          };
+
+          nix-builder-autoscaler-pyright = pkgs.stdenv.mkDerivation {
+            name = "nix-builder-autoscaler-pyright";
+            inherit src;
+            dontConfigure = true;
+            dontBuild = true;
+            nativeBuildInputs = [
+              testVenv
+              pkgs.nodejs
+            ];
+            checkPhase = ''
+              runHook preCheck
+              export HOME=$(mktemp -d)
+              pyright nix_builder_autoscaler/
+              runHook postCheck
+            '';
+            doCheck = true;
+            installPhase = ''
+              mkdir -p $out
+              touch $out/passed
+            '';
+          };
+
+          buildbot-autoscale-ext-tests = pkgs.stdenv.mkDerivation {
+            name = "buildbot-autoscale-ext-tests";
+            src = buildbotExtSrc;
+            dontConfigure = true;
+            dontBuild = true;
+            nativeBuildInputs = [ buildbotExtTestVenv ];
+            checkPhase = ''
+              runHook preCheck
+              export HOME=$(mktemp -d)
+              pytest buildbot_autoscale_ext/tests/ -v
+              runHook postCheck
+            '';
+            doCheck = true;
+            installPhase = ''
+              mkdir -p $out
+              touch $out/passed
+            '';
+          };
+
+          buildbot-autoscale-ext-ruff = pkgs.stdenv.mkDerivation {
+            name = "buildbot-autoscale-ext-ruff";
+            src = buildbotExtSrc;
+            dontConfigure = true;
+            dontBuild = true;
+            nativeBuildInputs = [ buildbotExtTestVenv ];
+            checkPhase = ''
+              runHook preCheck
+              ruff check buildbot_autoscale_ext/
+              ruff format --check buildbot_autoscale_ext/
+              runHook postCheck
+            '';
+            doCheck = true;
+            installPhase = ''
+              mkdir -p $out
+              touch $out/passed
+            '';
+          };
+
+          buildbot-autoscale-ext-pyright = pkgs.stdenv.mkDerivation {
+            name = "buildbot-autoscale-ext-pyright";
+            src = buildbotExtSrc;
+            dontConfigure = true;
+            dontBuild = true;
+            nativeBuildInputs = [
+              buildbotExtTestVenv
+              pkgs.nodejs
+            ];
+            checkPhase = ''
+              runHook preCheck
+              export HOME=$(mktemp -d)
+              pyright buildbot_autoscale_ext/
+              runHook postCheck
+            '';
+            doCheck = true;
+            installPhase = ''
+              mkdir -p $out
+              touch $out/passed
+            '';
+          };
+        }
       );

       devShells = forAllSystems (pkgs: {
         default = pkgs.mkShell {
           packages = with pkgs; [
             # TODO populate devshell for the project
             uv
             ruff
             pyright
           ];
         };
       });

       # TODO export module
       # nixosModules = {
       #   default =
       #     {
       #       config,
       #       lib,
       #       pkgs,
       #       ...
       #     }:
       #     {
       #       imports = [ ./nixos-module.nix ];
       #       services.TODSERVICENAME.package =
       #         lib.mkDefault
       #           self.packages.${pkgs.stdenv.hostPlatform.system}.default;
       #     };
       # };
     };
 }
treefmt.nix | 7 (new file)

@@ -0,0 +1,7 @@
+{ pkgs, ... }:
+{
+  projectRootFile = "flake.nix";
+  programs.nixfmt.enable = true;
+  programs.ruff-check.enable = true;
+  programs.ruff-format.enable = true;
+}