We do have to jump through a few hoops to make this work. First and
foremost, during validation the model should have a "cleansed" view
of its data, which means we add the subscription as a separate field
and append it to the mirror after validation.
It might be good to straighten this out later, also in the get path,
so that we can hide all required translation in the controller until
we can move this to a standard GUI component and straighten out the
mirror read on the other end when subscriptions are required (but
currently not appended).
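The validate-then-append flow above could be sketched as follows. This is illustrative only: the function and field names (set_mirror, validate, "mirror", "subscription") are assumptions, not the actual model code.

```python
# Hypothetical sketch: validate against a "cleansed" view of the data
# (mirror URL only), then append the subscription afterwards.

def validate(model):
    """Toy validator: a mirror must look like a URL."""
    if not model.get("mirror", "").startswith(("http://", "https://")):
        return ["mirror is not a valid URL"]
    return []

def set_mirror(model, mirror_url, subscription=None):
    # the model only ever validates the bare mirror URL ...
    model["mirror"] = mirror_url
    errors = validate(model)  # cleansed view: no subscription attached
    if errors:
        return errors
    # ... the subscription is appended only after validation succeeded
    if subscription:
        model["mirror"] = "%s/%s" % (mirror_url, subscription)
    return []
```

This keeps the validator oblivious to the subscription key, which is exactly why the get path later needs the reverse translation.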
Shield the logic from seeping over into firewall code and move
system_default_route() into system code.
There is a small overhead here in calling up the information again,
but we want to verify the interface address beforehand and perhaps
finally move the default gateway switching to the right spot, which
is perhaps system_routing_configure()?
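The ordering described above (verify the interface address first, switch the gateway only afterwards) could look roughly like this. The helper name and data shape are stand-ins for illustration, not actual functions from the codebase.

```python
# Sketch: refuse to switch the default gateway unless the interface
# address is verified and the gateway lives in the interface's subnet.
import ipaddress

def switch_default_gateway(interface_info, gateway):
    addr = interface_info.get("address")
    if not addr:
        return False  # no verified address, refuse to switch
    network = ipaddress.ip_network(
        "%s/%s" % (addr, interface_info["subnet_bits"]), strict=False
    )
    if ipaddress.ip_address(gateway) not in network:
        return False  # gateway unreachable from this interface
    # ... at this point the real code would update the routing table,
    # possibly from within system_routing_configure()
    return True
```

Fetching the interface details again costs a lookup, but it guarantees the check runs against current data rather than whatever the caller happened to pass along.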
While here, restructure the kill/start sequence a little and let
the service log prints capture the real work being done, so we
know which function is currently executing (waiting for a process
kill, for example).
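The restructured sequence amounts to logging before each step, so the slow parts (such as waiting for the process to exit) show up in the service log under the function doing the waiting. A minimal sketch, with hypothetical stop/wait/start callables:

```python
# Sketch: log before each phase of the kill/start sequence so the
# service log shows which step is currently executing.
import logging

log = logging.getLogger("service")

def restart_service(name, stop, wait_for_exit, start):
    log.info("stopping %s", name)
    stop(name)
    log.info("waiting for %s to exit", name)  # the slow part is now visible
    wait_for_exit(name)
    log.info("starting %s", name)
    start(name)
```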
I'm sure @maurice-w will rejoice.
Cache size and TTL support a zero value, which was ignored by the input validation.
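The zero-value bug is the classic truthiness pitfall: a bare emptiness check treats 0 the same as "unset". A sketch of the buggy versus fixed check (the helper names and config shape are illustrative, not the actual model code):

```python
# Sketch: why 0 was dropped, and how an explicit empty check keeps it.

def apply_setting_buggy(config, key, value):
    if value:  # 0 is falsy, so a legitimate zero is silently ignored
        config[key] = value

def apply_setting_fixed(config, key, value):
    if value is not None and value != "":  # explicit check keeps 0
        config[key] = value
```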
Derive help text from the manual page: https://thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html
Avoid validation during config write so as not to mask issues in the code.
Do not delete system_resolvconf_generate/system_hosts_generate yet.
We may just end up renaming them in order to get external callers
to adapt to the new layout.
o extend model with authgroup type (currently only for OpenVPN)
o add controller action to list user groups
o modify the alias form to show the group list in a similar way as network groups; simplify some of the code to avoid duplication.
o add an AuthGroup parser to glue the output of list_group_members.php and ovpn_status.py into a set of addresses per group for our new authgroup alias type to use
o hook 'learn-address' event in openvpn to trigger an alias update
Although theoretically we could pass addresses and common_names from learn-address further into our pipeline, for now we choose a common approach which should always offer the correct dataset (also after changing aliases and re-applying them). If for some reason this isn't fast enough, there are always options available to improve the situation, but usually at a cost in terms of complexity.
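The glue step described in the list above could be sketched as follows. The data shapes are assumptions for illustration: a user-to-groups mapping as list_group_members.php might emit, and a user-to-addresses mapping as ovpn_status.py might emit, combined into one address set per group.

```python
# Hypothetical sketch of the AuthGroup parser: resolve each group's
# members to their current VPN session addresses.

def authgroup_addresses(group_members, vpn_sessions):
    """group_members: {group: [usernames]}
    vpn_sessions: {username: [addresses]}
    returns: {group: set(addresses)}"""
    result = {}
    for group, users in group_members.items():
        addresses = set()
        for user in users:
            addresses.update(vpn_sessions.get(user, []))
        result[group] = addresses
    return result
```

Rebuilding the full dataset on every learn-address event is what makes the result correct after aliases are changed and re-applied, at the cost of recomputing entries that did not change.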