Looking to use pylibjuju to deploy an application hulk-smashed to existing machines per application


I am working on a testing suite that uses libjuju to mimic a hardware MAAS infrastructure layer and then land a charm on those MAAS hosts using juju.model.Model().deploy() with the "to" kwarg.

In my environment I have the following units:

maas-bionic/0*    active    idle    5    ready
maas-bionic/1     active    idle    6    ready
maas-bionic/2     active    idle    7    ready

I'd like a way to programmatically query the Model/Application/Unit status, filtering on application == "maas-bionic", and get back the placement directive list [5, 6, 7] to pass to
model.deploy(application="maas-monitor-bionic", charm="mylocalcharm", num_units=3, to=[5, 6, 7])

I see that model.get_state() returns a FullStatus object that includes applications, but those appear to be strings, rather than ApplicationStatus objects that would have an ApplicationStatus.units collection of UnitStatus objects.

I feel like I'm missing something simple in the status tree that would let me identify the machine numbers this application is deployed to in the current model. Can anyone help?
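Concretely, what I'm after is something like the sketch below. It assumes the FullStatus tree layout (applications is a name → ApplicationStatus mapping, ApplicationStatus.units maps unit names to UnitStatus objects, and UnitStatus.machine holds the hosting machine's id), and that model.get_status() is the call that returns it:

```python
# Sketch only: assumes the FullStatus tree described above,
# reached via an already-connected pylibjuju Model.
async def machines_for(model, app_name):
    status = await model.get_status()           # FullStatus
    app_status = status.applications[app_name]  # ApplicationStatus
    # Each UnitStatus carries the id of the machine hosting that unit.
    return [unit.machine for unit in app_status.units.values()]
```

With the units above, machines_for(model, "maas-bionic") would ideally hand back something like ["5", "6", "7"].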

Thank you,


Hi Drew, is this kind of what you were after?

# some boilerplate
>>> import asyncio
>>> from juju.model import Model
>>> loop = asyncio.new_event_loop()
>>> asyncio.set_event_loop(loop)
>>> model = Model()
>>> await model.connect_current()
>>> maas_monitor = model.applications["maas-monitor-bionic"]
>>> [(unit.agent_status, unit.workload_status) for unit in maas_monitor.units]


I think that’ll definitely get me close enough to be able to pdb my way into the vars I’m looking for. Thank you so much!


Ultimately, this is the working outcome:

await model.connect_current()
app = model.applications["smooshed-application"]
placement = [unit.machine.id for unit in app.units]

await model.deploy( ... , to=placement, num_units=len(placement))
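Wrapped up as a reusable helper (a sketch: deploy_alongside and its arguments are my own names, and model is assumed to be an already-connected pylibjuju Model whose Unit objects expose machine.id as above):

```python
# Sketch of a helper around the working outcome above.
async def deploy_alongside(model, source_app, charm_path):
    # Collect the ids of the machines already hosting source_app...
    placement = [unit.machine.id for unit in model.applications[source_app].units]
    # ...and hulk-smash one unit of the new charm onto each of them.
    return await model.deploy(charm_path,
                              to=placement,
                              num_units=len(placement))
```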

Thank you so much for your help, Tim!