Bareon functional testing / CI
Bareon-func-test project structure (formerly fpa-func-framework)
- Python API to start services on the controller
- Python API to upload custom stub images, firmwares, etc.
- Python API to control slave nodes
- /etc/bareon-func-test.conf (a hypothetical example is sketched below)
  - virsh creds
  - IPMI creds
  - degree of parallelism (number of slaves)
    - slaves are pooled
  - optional DHCP, TFTP, PXE, HTTP params
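The slides do not show the config file format; purely as a hypothetical sketch, an INI-style /etc/bareon-func-test.conf covering the settings listed above might look like this (all section and key names are assumptions, not the real schema):

    [virsh]
    uri = qemu+ssh://root@controller/system   # hypothetical: virsh/libvirt credentials

    [ipmi]
    address = 10.0.0.10                       # hypothetical: IPMI creds for BM slaves
    username = admin
    password = secret

    [lab]
    parallelism = 4                           # hypothetical: number of pooled slaves

    [network]
    dhcp_range = 10.0.0.100,10.0.0.200        # hypothetical optional DHCP/TFTP/PXE/HTTP params
    tftp_root = /var/lib/tftpboot
    http_root = /var/www/images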
Bareon project structure (formerly fuel-agent)
- code
- unit tests
  - run by tox in the unit-tests env
- functional tests (a test skeleton is sketched below)
  - run by tox in the func-tests env
  - import bareon-func-test
  - setUp, tearDown are written using the API provided by bareon-func-test
  - default slave lab configuration (disk space, CPUs, RAM) is defined in the base setUp
    - can be overridden for a particular test using the bareon-func-test API
  - functional tests run only if /etc/bareon-func-test.conf is present, otherwise they are skipped
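For illustration only, a functional test base class might be structured roughly as follows. This is a minimal sketch: the module name bareon_func_test, the Lab class, and the acquire_slave/release_slave methods are assumptions about the API, not the real interface.

    import os
    import unittest

    # Hypothetical import; the real package/module layout may differ.
    import bareon_func_test as bft

    CONF_PATH = '/etc/bareon-func-test.conf'


    @unittest.skipUnless(os.path.exists(CONF_PATH),
                         'functional tests are skipped without %s' % CONF_PATH)
    class TestProvisionBase(unittest.TestCase):

        # Default slave lab configuration; subclasses may override it.
        SLAVE_CONFIG = {'disks_gb': [20], 'cpus': 1, 'ram_mb': 2048}

        def setUp(self):
            super(TestProvisionBase, self).setUp()
            # Assumed API: the lab object wraps the controller-side services.
            self.lab = bft.Lab(config=CONF_PATH)
            self.slave = self.lab.acquire_slave(**self.SLAVE_CONFIG)

        def tearDown(self):
            # Return the slave to the pool so other workers can reuse it.
            self.lab.release_slave(self.slave)
            super(TestProvisionBase, self).tearDown()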
bareon-func-test lab
Controller (one, VM or BM):
- initial node where bareon is fetched to run tests
- hosts DHCP, TFTP, PXE and a fake image service
  - all workers share a single set of services
- spawns slaves (using Python virsh bindings, or preconfigured BM nodes; see the sketch below)
- manages the pool of slaves
- executes tests using the available number of slaves
- drives FPA in every test (ssh-ing to the slave)
- if too much code overlaps with Ironic itself, this may be based on Ironic
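A minimal sketch of how the controller could spawn a virtual slave through the Python libvirt bindings (assuming libvirt-python; the domain XML is abbreviated and the helper name is made up):

    import libvirt

    SLAVE_XML = """
    <domain type='kvm'>
      <name>bareon-slave-0</name>
      <memory unit='MiB'>2048</memory>
      <vcpu>1</vcpu>
      <os>
        <type arch='x86_64'>hvm</type>
        <boot dev='network'/>  <!-- PXE boot from the controller's DHCP/TFTP -->
      </os>
      <!-- disks and NICs omitted for brevity -->
    </domain>
    """

    def spawn_slave(uri='qemu:///system'):
        # Hypothetical helper: define and power on a PXE-booting slave VM.
        conn = libvirt.open(uri)
        domain = conn.defineXML(SLAVE_XML)
        domain.create()  # power management also goes through libvirt
        return domain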
Slave node (many, VM or BM or both):
- depending on /etc/bareon-func-test.conf:
  - can live inside the controller (nested virt)
  - can live on the same level as the controller (networks for the non-nested case?)
  - can be a BM server
- booted via PXE
- runs a ramdisk with the agent
- runs tests (one by one, driven by the controller)
Single test (ramdisk only)
Inputs:
- ramdisk build
- one slave node
- provision.json
- a set of commands and params to execute on the verify step (a sketch follows below)
  - lsblk
  - parted
  - etc.
- expected output json
- optional params (inherited from the base test case if not specified):
  - a custom image
  - a custom firmware
  - etc.
Outputs:
- ramdisk log
- passed: True/False
- in future:
  - performance grade (based on statistics)
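As an illustration of the verify step, the following sketch runs the listed commands on the provisioned slave over ssh and compares the result with the expected output json. The command options, helper name and json layout are assumptions, not the actual test harness.

    import json
    import subprocess

    VERIFY_COMMANDS = {
        'lsblk': ['lsblk', '--bytes'],
        'parted': ['parted', '--machine', '/dev/sda', 'print'],
    }

    def verify(slave_ip, expected_path):
        # Run each verify command on the slave and collect its output.
        actual = {}
        for name, cmd in VERIFY_COMMANDS.items():
            out = subprocess.check_output(['ssh', 'root@%s' % slave_ip] + cmd)
            actual[name] = out.decode().strip()

        # Compare against the expected output json shipped with the test.
        with open(expected_path) as f:
            expected = json.load(f)
        return actual == expected  # passed: True/False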
Single test (involving reboot to tenant image)
Inputs:
- ramdisk build
- a special tenant image with a callback and a built-in key
- one slave node
- provision.json
- a set of commands and params to execute on the ramdisk verify step
- expected ramdisk output json
- a set of commands and params to execute on the tenant image verify step
- expected tenant image output json
- optional params (inherited from the base test case if not specified)
Outputs:
- ramdisk log
- tenant image boot log
- passed: True/False
- in future:
  - performance grade (based on statistics)
Logs
- both agent logs and tenant image logs are sent to the controller and published
- logs are sent continuously (where possible) to be able to trace a possible kernel panic, etc. (a streaming sketch follows below)
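One possible way to stream a slave log continuously to the controller, sketched here with plain ssh; in practice the agent may push its logs itself, and the paths and helper name below are assumptions:

    import subprocess

    def stream_slave_log(slave_ip, remote_log='/var/log/bareon.log',
                         local_log='slave.log'):
        # Follow the agent log over ssh so the output captured so far survives
        # even if the slave kernel panics mid-test.
        out = open(local_log, 'ab')
        return subprocess.Popen(
            ['ssh', 'root@%s' % slave_ip, 'tail', '-F', remote_log],
            stdout=out, stderr=subprocess.STDOUT)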
Group of tests (Test class)
- share the same setup config (an override sketch follows below):
  - request a specific image
  - request a specific firmware
  - request a specific node
    - multiple disks
    - existing data, to test data preservation
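Building on the TestProvisionBase sketch above, a test class could override the shared setup config roughly like this (again hypothetical names for the bareon-func-test API):

    class TestPreserveData(TestProvisionBase):
        """A group of tests that share one slave configuration."""

        # Hypothetical override: a node with two disks, to exercise the
        # preserve-data path.
        SLAVE_CONFIG = {'disks_gb': [20, 40], 'cpus': 1, 'ram_mb': 2048}

        def setUp(self):
            super(TestPreserveData, self).setUp()
            # Assumed helper: put existing data on the second disk first.
            self.lab.populate_disk(self.slave, disk=1)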
Parallelism
- single controller node
  - a shared PXE, TFTP, HTTP for all slaves
- multiple virtual slave nodes (based on config)
  - spawned on demand using Python virsh bindings
  - power management via virsh
- multiple baremetal slave nodes (based on config)
  - need to set IPMI creds in the config
  - spawned via IPMI
  - power management via IPMI
  - if we want to host a few labs in parallel (test a few ramdisks at a time), the available HW nodes need to be split between labs
- parallel test execution is done via testr (the OpenStack standard testing tool)
  - we configure processes=number_of_slaves (see the note below)
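For reference, with the standard testr workflow the worker count can also be passed at invocation time; a hedged example, with the value chosen to match the number of available slaves:

    testr run --parallel --concurrency=4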
Triggers in CI
Q&A