I am trying to run a playbook that queries our F5 for a specific pool and displays only the fields I need. I can parse any key directly under `ltm_pools`, but when the value is a nested list like `members`, I can't filter down to specific fields. As you can see below, I include `members` in my output, but it displays every field of every member. How can I select specific fields, such as `name`, under `members`?
See the JSON output:
"ltm_pools": [
    {
        "all_max_queue_entry_age_recently": 0,
        "pool_queue_head_entry_age": 0,
        "current_sessions": 0,
        "pool_max_queue_entry_age_recently": 0,
        "server_side_max_connections": 0,
        "monitors": [
            "/Common/tcp"
        ],
        "available_member_count": 0,
        "server_side_bits_in": 0,
        "pool_max_queue_entry_age_ever": 0,
        "member_count": 4,
        "priority_group_activation": 0,
        "allow_snat": "yes",
        "reselect_tries": 0,
        "enabled_status": "enabled",
        "active_member_count": 0,
        "server_ip_tos": "pass-through",
        "server_link_qos": "pass-through",
        "queue_on_connection_limit": "no",
        "queue_depth_limit": 0,
        "minimum_up_members_action": "failover",
        "allow_nat": "yes",
        "lb_method": "round-robin",
        "all_avg_queue_entry_age": 0,
        "ignore_persisted_weight": "no",
        "server_side_bits_out": 0,
        "all_num_connections_serviced": 0,
        "minimum_active_members": 0,
        "service_down_action": "none",
        "server_side_current_connections": 0,
        "members": [
            {
                "real_session": "monitor-enabled",
                "rate_limit": "no",
                "inherit_profile": "yes",
                "real_state": "down",
                "address": "1.1.1.1",
                "logging": "no",
                "monitors": [],
                "ratio": 1,
                "name": "1.1.1.1:3268",
                "partition": "Common",
                "ephemeral": "no",
                "connection_limit": 0,
                "state": "offline",
                "full_path": "/Common/1.1.1.1:3268",
                "fqdn_autopopulate": "no",
                "priority_group": 0,
                "dynamic_ratio": 1
            },
            {
                "real_session": "monitor-enabled",
                "rate_limit": "no",
                "inherit_profile": "yes",
                "real_state": "down",
                "address": "2.2.2.2",
                "logging": "no",
                "monitors": [],
                "ratio": 1,
                "name": "2.2.2.2:3268",
                "partition": "Common",
                "ephemeral": "no",
                "connection_limit": 0,
                "state": "offline",
                "full_path": "/Common/2.2.2.2:3268",
                "fqdn_autopopulate": "no",
                "priority_group": 0,
                "dynamic_ratio": 1
            },
            {
                "real_session": "monitor-enabled",
                "rate_limit": "no",
                "inherit_profile": "yes",
                "real_state": "down",
                "address": "3.3.3.3",
                "logging": "no",
                "monitors": [],
                "ratio": 1,
                "name": "3.3.3.3:3268",
                "partition": "Common",
                "ephemeral": "no",
                "connection_limit": 0,
                "state": "offline",
                "full_path": "/Common/3.3.3.3:3268",
                "fqdn_autopopulate": "no",
                "priority_group": 0,
                "dynamic_ratio": 1
            },
            {
                "real_session": "monitor-enabled",
                "rate_limit": "no",
                "inherit_profile": "yes",
                "real_state": "down",
                "address": "3.3.3.3",
                "logging": "no",
                "monitors": [],
                "ratio": 1,
                "name": "3.3.3.3:3268",
                "partition": "Common",
                "ephemeral": "no",
                "connection_limit": 0,
                "state": "offline",
                "full_path": "/Common/3.3.3.3:3268",
                "fqdn_autopopulate": "no",
                "priority_group": 0,
                "dynamic_ratio": 1
            }
        ],
        "all_max_queue_entry_age_ever": 0,
        "queue_time_limit": 0,
        "status_reason": "The children pool member(s) are down",
        "server_side_pkts_out": 0,
        "server_side_total_connections": 0,
        "pool_num_connections_serviced": 0,
        "client_link_qos": "pass-through",
        "name": "test",
        "pool_num_connections_queued_now": 0,
        "minimum_up_members": 0,
        "all_queue_head_entry_age": 0,
        "slow_ramp_time": 10,
        "server_side_pkts_in": 0,
        "client_ip_tos": "pass-through",
        "minimum_up_members_checking": "no",
        "all_num_connections_queued_now": 0,
        "availability_status": "offline",
        "pool_avg_queue_entry_age": 0,
        "total_requests": 0,
        "full_path": "/Common/test"
    },
This is my playbook:
- hosts: localhost
  tasks:
    - name: collect device info
      bigip_device_info:
        gather_subset:
          - ltm-pools
      delegate_to: localhost
      register: f5pools

    # Show the name, IP address, port and status of a virtual server
    - name: Display Config for a specific Virtual Server using a variable
      debug:
        var: item
      loop: "{{ f5pools | json_query(pool_name) }}"
      vars:
        pool_name: "ltm_pools[?name=='{{ poolName }}'].{name: name, Method: lb_method, Members: members, Monitors: monitors}"
In your case, you can nest a JMESPath multiselect inside another multiselect to achieve what you are looking for.
So instead of using plain
Members: members
in your multiselect, you can use something like
Members: members[*].{name: name}
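To see what that nested projection actually does, here is a rough, stdlib-only Python sketch of the same query (not using the real `jmespath` library; the `f5pools` data is a trimmed-down stand-in for the actual facts):

```python
# Plain-Python equivalent of the JMESPath expression
# "ltm_pools[?name=='test'].{name: name, Method: lb_method,
#  Members: members[*].{name: name}, Monitors: monitors}"
f5pools = {
    "ltm_pools": [
        {
            "name": "test",
            "lb_method": "round-robin",
            "monitors": ["/Common/tcp"],
            "members": [
                {"name": "1.1.1.1:3268", "state": "offline"},
                {"name": "2.2.2.2:3268", "state": "offline"},
            ],
        }
    ]
}

result = [
    {
        "name": pool["name"],
        "Method": pool["lb_method"],
        # Nested multiselect: for each member, keep only the chosen key.
        "Members": [{"name": m["name"]} for m in pool["members"]],
        "Monitors": pool["monitors"],
    }
    for pool in f5pools["ltm_pools"]
    if pool["name"] == "test"  # the [?name=='...'] filter
]

print(result)
```

The outer filter selects the pool, and the inner `members[*].{name: name}` projection rebuilds each member as a one-key object.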
Given the playbook:
- hosts: all
  gather_facts: no
  tasks:
    - name: Display Config for a specific Virtual Server using a variable
      debug:
        var: item
      loop: "{{ f5pools | json_query(pool_name) }}"
      loop_control:
        label: "{{ item.name }}"
      vars:
        pool_name: "ltm_pools[?name=='{{ poolName }}'].{name: name, Method: lb_method, Members: members[*].{name: name}, Monitors: monitors}"
        f5pools:
          ltm_pools:
            - all_max_queue_entry_age_recently: 0
              pool_queue_head_entry_age: 0
              current_sessions: 0
              pool_max_queue_entry_age_recently: 0
              server_side_max_connections: 0
              monitors:
                - /Common/tcp
              available_member_count: 0
              server_side_bits_in: 0
              pool_max_queue_entry_age_ever: 0
              member_count: 4
              priority_group_activation: 0
              allow_snat: yes
              reselect_tries: 0
              enabled_status: enabled
              active_member_count: 0
              server_ip_tos: pass-through
              server_link_qos: pass-through
              queue_on_connection_limit: no
              queue_depth_limit: 0
              minimum_up_members_action: failover
              allow_nat: yes
              lb_method: round-robin
              all_avg_queue_entry_age: 0
              ignore_persisted_weight: no
              server_side_bits_out: 0
              all_num_connections_serviced: 0
              minimum_active_members: 0
              service_down_action: none
              server_side_current_connections: 0
              members:
                - real_session: monitor-enabled
                  rate_limit: no
                  inherit_profile: yes
                  real_state: down
                  address: 1.1.1.1
                  logging: no
                  monitors: []
                  ratio: 1
                  name: 1.1.1.1:3268
                  partition: Common
                  ephemeral: no
                  connection_limit: 0
                  state: offline
                  full_path: /Common/1.1.1.1:3268
                  fqdn_autopopulate: no
                  priority_group: 0
                  dynamic_ratio: 1
                - real_session: monitor-enabled
                  rate_limit: no
                  inherit_profile: yes
                  real_state: down
                  address: 2.2.2.2
                  logging: no
                  monitors: []
                  ratio: 1
                  name: 2.2.2.2:3268
                  partition: Common
                  ephemeral: no
                  connection_limit: 0
                  state: offline
                  full_path: /Common/2.2.2.2:3268
                  fqdn_autopopulate: no
                  priority_group: 0
                  dynamic_ratio: 1
                - real_session: monitor-enabled
                  rate_limit: no
                  inherit_profile: yes
                  real_state: down
                  address: 3.3.3.3
                  logging: no
                  monitors: []
                  ratio: 1
                  name: 3.3.3.3:3268
                  partition: Common
                  ephemeral: no
                  connection_limit: 0
                  state: offline
                  full_path: /Common/3.3.3.3:3268
                  fqdn_autopopulate: no
                  priority_group: 0
                  dynamic_ratio: 1
                - real_session: monitor-enabled
                  rate_limit: no
                  inherit_profile: yes
                  real_state: down
                  address: 3.3.3.3
                  logging: no
                  monitors: []
                  ratio: 1
                  name: 3.3.3.3:3268
                  partition: Common
                  ephemeral: no
                  connection_limit: 0
                  state: offline
                  full_path: /Common/3.3.3.3:3268
                  fqdn_autopopulate: no
                  priority_group: 0
                  dynamic_ratio: 1
              all_max_queue_entry_age_ever: 0
              queue_time_limit: 0
              status_reason: The children pool member(s) are down
              server_side_pkts_out: 0
              server_side_total_connections: 0
              pool_num_connections_serviced: 0
              client_link_qos: pass-through
              name: test
              pool_num_connections_queued_now: 0
              minimum_up_members: 0
              all_queue_head_entry_age: 0
              slow_ramp_time: 10
              server_side_pkts_in: 0
              client_ip_tos: pass-through
              minimum_up_members_checking: no
              all_num_connections_queued_now: 0
              availability_status: offline
              pool_avg_queue_entry_age: 0
              total_requests: 0
              full_path: /Common/test
        poolName: test
This yields the recap:
PLAY [all] ********************************************************************************************************
TASK [Display Config for a specific Virtual Server using a variable] **********************************************
ok: [localhost] => (item=test) => {
    "ansible_loop_var": "item",
    "item": {
        "Members": [
            {
                "name": "1.1.1.1:3268"
            },
            {
                "name": "2.2.2.2:3268"
            },
            {
                "name": "3.3.3.3:3268"
            },
            {
                "name": "3.3.3.3:3268"
            }
        ],
        "Method": "round-robin",
        "Monitors": [
            "/Common/tcp"
        ],
        "name": "test"
    }
}
PLAY RECAP ********************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
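As a side note: if a flat list of member names is enough (rather than a list of one-key objects), JMESPath also lets you project a single field with `Members: members[*].name`. A quick stdlib-only sketch of the difference between the two projections:

```python
# Stand-ins for the two JMESPath projections:
#   members[*].{name: name}  -> list of single-key dicts
#   members[*].name          -> flat list of strings
members = [
    {"name": "1.1.1.1:3268", "state": "offline"},
    {"name": "2.2.2.2:3268", "state": "offline"},
]

as_dicts = [{"name": m["name"]} for m in members]  # members[*].{name: name}
as_names = [m["name"] for m in members]            # members[*].name

print(as_dicts)
print(as_names)
```

Pick whichever shape is easier to consume in the rest of your playbook.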