client_connect.php also includes it, but not config.inc. Try to leave
it at that to avoid polluting it unnecessarily. The other scripts might
be able to do the same, but don't fix something that is not
broken either.
Added an if/then check to determine whether the GUI-provided server is part of the public NTP pool. If the hostname ends in 'pool.ntp.org', the entry is written to ntpd.conf with 'pool' instead of 'server' for that network server; otherwise it is written as 'server'. The pool directive tells ntpd to treat it differently: a 'server' host is only looked up at service startup, whereas a 'pool' host is monitored and replaced if it becomes unresponsive or is determined to be a falseticker, among other things. ntpd will also pull several DNS entries for each pool entry, so I have a follow-up change to allow configuration of this setting in the GUI, known as 'maxclock'. It sets how many servers to maintain, with a default of 10.
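The hostname check above can be sketched as follows. This is an illustrative Python sketch, not the actual (PHP) implementation; the function name `ntp_config_line` is hypothetical.

```python
def ntp_config_line(server: str) -> str:
    """Return the ntpd.conf directive line for a configured time server.

    Hosts in the public NTP pool get the 'pool' directive, so ntpd keeps
    re-resolving DNS and replaces unresponsive servers or falsetickers;
    other hosts get 'server', which is only resolved at service startup.
    """
    directive = 'pool' if server.endswith('pool.ntp.org') else 'server'
    return f'{directive} {server}'
```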
This adds GUI support for the maxclock system setting. It is used to tell ntpd how many associations (time servers) to maintain. The default is 10; however, the ntpd docs suggest an odd number to make falseticker detection simpler. This change writes the GUI value to ntpd.conf.
With the use of the pool directive, ntpd will use more servers than are listed on the general page. This setting allows the user to set the maximum number of associations (time servers) to be maintained. ntpd will use multiple entries from each pool entry that it maintains. The default is 10, but the ntpd docs say to use an odd number to make throwing out falsetickers easier. The number of servers actually used is calculated somewhat oddly from the maximum together with the pool entries. For example, with a setting of 10 and the four default X.opnsense.pool.ntp.org entries, it will maintain 6 associations instead of the 4 listed in the GUI. I went into more detail in the issue itself.
You can, for example, use only 'us.pool.ntp.org' and it will maintain 9 associations from that pool. This means the default install configuration could be just '0.opnsense.pool.ntp.org' or, if possible, set up an 'opnsense.pool.ntp.org', so perhaps some documentation changes are in order as well?
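In ntpd.conf, the maxclock cap is written as a `tos maxclock` line ahead of the server entries. An illustrative excerpt of what the generated file might contain (the value 7 is just an example of an odd setting, per the ntpd docs):

```
# illustrative ntpd.conf excerpt
tos maxclock 7
pool 0.opnsense.pool.ntp.org
pool 1.opnsense.pool.ntp.org
server time.example.com
```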
I duplicated how the orphan setting is addressed; however, I do not know how these settings are maintained in a configuration backup, so someone smarter may need to address that if required?
Migrate the UI to MVC and wrap a model around the existing configuration area to retain backward compatibility.
To avoid laggs configured via the console not being reachable from the GUI, add a UUID to them.
PHP Deprecated: Creation of dynamic property OPNsense\Core\Api\MenuController::$request is deprecated in /usr/local/opnsense/mvc/app/controllers/OPNsense/Base/ApiControllerBase.php on line 195
PHP Deprecated: Creation of dynamic property OPNsense\Core\Api\MenuController::$session is deprecated in /usr/local/opnsense/mvc/app/controllers/OPNsense/Base/ControllerRoot.php on line 149
PHP Deprecated: Creation of dynamic property OPNsense\Core\Api\MenuController::$security is deprecated in /usr/local/opnsense/mvc/app/controllers/OPNsense/Base/ApiControllerBase.php on line 298
o make sure DbConnection() throws a new StorageVersionException when storage versions mismatch
o add restore_database() function to overwrite an existing database with the contents of an earlier backup made by the pre-upgrade hook
o the logger is responsible for the database; on startup, it should validate the version and initiate a restore when there's a mismatch
In case the storage version doesn't match, there are three options: the backup is locked (a restore is running), in which case we exit; the restore went fine and we can start the logger; or there is no backup available, in which case we have no other choice than to drop the unsupported database.
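The three-way startup decision can be sketched as a small pure function. This is a hedged sketch only; the function and argument names (`decide`, `mismatch`, `backup_locked`, `backup_exists`) are illustrative stand-ins, not the actual API.

```python
def decide(mismatch: bool, backup_locked: bool, backup_exists: bool) -> str:
    """Return the action the logger should take on startup."""
    if not mismatch:
        return 'start'    # storage versions match, start the logger normally
    if backup_locked:
        return 'exit'     # a restore is already running, bail out
    if backup_exists:
        return 'restore'  # overwrite the db with the pre-upgrade backup
    return 'drop'         # no backup available: drop the unsupported database
```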
While here, also fix a small issue in stats.py leading to NaN values being returned due to https://github.com/duckdb/duckdb/issues/4066
There seem to be two issues:
1. Tentative addresses could always have been ignored for the wrong reasons,
and we can safely move the delay to this script even though a small delay
will be the result (2 seconds with the default sysctl). Not sure why this
problem did not matter that much previously, but at least we can move the other
instance of the delay here and avoid duplication, since it will continue
to load this script anyway.
2. Due to overlaps and technical convolution, these scripts can be run multiple
times in very short succession, especially on bootup. Since we have a delay
here now, we take a lock first to "catch" stray invocations. The only issue
I see is that we could lose the "force" flag in the process, but if that is
the case the log message will reveal it, and we can work around this as well,
perhaps with a two-stage log.
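The lock-then-delay idea can be sketched as below. This is an illustrative Python sketch, not the actual (shell) implementation; the lock file path and the 2-second DAD delay are assumptions taken from the description above.

```python
import fcntl
import time

def run_once_with_delay(lockfile: str = '/tmp/newwanipv6.lock',
                        delay: float = 2.0) -> None:
    """Serialize stray invocations behind an exclusive lock, then wait
    out the tentative-address (DAD) window before doing the real work."""
    with open(lockfile, 'w') as fp:
        fcntl.flock(fp, fcntl.LOCK_EX)  # blocks until an earlier run finishes
        try:
            time.sleep(delay)           # let tentative addresses settle
            # ... actual address handling would go here ...
        finally:
            fcntl.flock(fp, fcntl.LOCK_UN)
```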
The logger is responsible for database maintenance; when the storage version doesn't match on startup, it should import the previous content from this directory so we are able to survive duckdb version upgrades.
For more information, see https://duckdb.org/internals/storage
Although the code is still a bit convoluted because the dropdown is used for multiple purposes, it makes sense to always show the option to add a new one when none can be found, and to only show the related rule when it can be found.
To allow legacy services without a model to hook into the `ApiMutableServiceController`, we define a protected `serviceEnabled` function that by default checks the given `internalServiceEnabled` property to see if a service is enabled, but allows derived classes to override the functionality. We loosen the property restrictions in `initialize()` by moving the checks to their runtime implementations.
DHCPv4/v6 is modified here to hook into this change, but since `actions_services` requires the keyword `service`, which isn't used by the mutable service controller, we define start/stop/restart/status actions in the `actions_dhcpd.conf` and the new `actions_dhcpd6.conf` files.
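The override pattern described above can be sketched in Python as a base class that consults a declared property by default, while letting model-less legacy services override the check. The class and attribute names below are illustrative analogues of the PHP ones, not the actual controllers.

```python
class ApiMutableServiceControllerSketch:
    """Base controller: serviceEnabled() defaults to checking a property."""
    internal_service_enabled = ''   # model path, e.g. 'general.enabled'

    def service_enabled(self) -> bool:
        # default behaviour: consult the configured model property
        return self._lookup(self.internal_service_enabled) == '1'

    def _lookup(self, path: str) -> str:
        # stand-in for the real model lookup
        return getattr(self, '_config', {}).get(path, '0')

class Dhcpv4ControllerSketch(ApiMutableServiceControllerSketch):
    """Legacy service without a model: override the default check."""
    def service_enabled(self) -> bool:
        return getattr(self, '_legacy_enabled', False)
```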
* dhcp6: add backend for listing dhcpv6 leases
* dhcp6: add leases view and controller
* dhcp6: lease deletion backend
* dhcp6: move to separate dhcpv6 directory to accommodate the service control UI
The process simply fires off N requests, with each request restarting the dhcp server. Aggregating the addresses is likely not worth the effort, so just drop the feature.