heartbeats:
I see you've implemented a requirement that clients send a message every
two minutes. A topic like the above .../<clientid>/active, which is set
to 1 when a client connects and to 0 by its last will, would have the
same effect without requiring heartbeats every two minutes, and thus
use less of our precious cellular data.

The problem is if the client does not set the last will correctly, or the server/library doesn't handle it. We'd get stuck thinking a client is connected, and use ten times the data forever.

There is also the related issue of a locked-up client (not disconnected). The heartbeat lets us know that the client is both connected and working.

Compared to the amount of data coming from the module to the client, a single client->module message once a minute, sent only when the client app is actually running and connected, is minimal.

Any other way to handle the misbehaving client issue?

To provide another example of this: I'm using the mqtt.fx desktop client. It can set up a last will just fine, but if you press the disconnect button the last will is not sent to the OVMS module. Force quitting the application does correctly send the last will.

It seems that the MQTT specification says that if the client _cleanly_ disconnects (with the DISCONNECT MQTT message) then the last will should not be published. Urgh.

So, our full spec for this should be:

  1. Client should publish "<prefix>/client/<clientid>/active = 1":
    1. When it first connects
    2. Every subsequent 60 seconds
    3. When it receives a "<prefix>/metric/s/v3/connected = yes"

  2. Client should publish "<prefix>/client/<clientid>/active = 0":
    1. As its MQTT last-will-and-testament
    2. Prior to a controlled disconnection

QOS 0, not retained, is fine for these.
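To make the client-side rules concrete, here is a minimal sketch in Python. The `publish` callable is a stand-in for a real MQTT publish (QOS 0, not retained); the class and method names are purely illustrative, not part of any actual client.

```python
def active_topic(prefix: str, client_id: str) -> str:
    """Build the activity topic per the spec above."""
    return f"{prefix}/client/{client_id}/active"

class HeartbeatClient:
    """Illustrative sketch of the client-side heartbeat rules."""
    INTERVAL = 60  # seconds between heartbeats (rule 1.2)

    def __init__(self, prefix, client_id, publish):
        self.topic = active_topic(prefix, client_id)
        self.publish = publish  # stand-in for an MQTT publish call
        self.last_sent = None

    def on_connect(self, now):
        # Rule 1.1: announce activity on first connect.
        self.publish(self.topic, "1")
        self.last_sent = now

    def on_tick(self, now):
        # Rule 1.2: repeat every 60 seconds.
        if self.last_sent is not None and now - self.last_sent >= self.INTERVAL:
            self.publish(self.topic, "1")
            self.last_sent = now

    def on_module_connected(self):
        # Rule 1.3: a newly connected module should learn about us at once.
        self.publish(self.topic, "1")

    def on_disconnect(self):
        # Rule 2.2: announce a controlled disconnection explicitly,
        # since a clean MQTT DISCONNECT suppresses the last will.
        self.publish(self.topic, "0")
```

Rule 2.1 (the last will itself) is set at CONNECT time by the MQTT library, so it does not appear in the sketch.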

Regards, Mark.

P.S. I’ve made the other changes, for topic hierarchy. It seems ok; committed and pushed.

On 6 Jul 2018, at 11:15 AM, Mark Webb-Johnson <mark@webb-johnson.net> wrote:

Jakob,

I suppose the following topic names:
metrics:       <prefix>/metric/#
events:        <prefix>/event/#
notifications: <prefix>/notify/#
config:        <prefix>/config/#
logs:          <prefix>/log/<tag>
active:        <prefix>/client/<clientid>/active
requests:      <prefix>/client/<clientid>/request/#
commands:      <prefix>/client/<clientid>/command/<command id>
cmd responses: <prefix>/client/<clientid>/response/<command id>

All OK. I am fine with this, and it looks clean. I will make the changes today, as I want to get this into some cars ASAP so we can get a feel for how it behaves.

Filtering:
In order to decrease used data, a blacklist of metrics which are not
sent (like m.freeram or m.monotonic) would be cool.

Yep. Makes sense.

heartbeats:
I see you've implemented a requirement that clients send a message every
two minutes. A topic like the above .../<clientid>/active, which is set
to 1 when a client connects and to 0 by its last will, would have the
same effect without requiring heartbeats every two minutes, and thus
use less of our precious cellular data.

The problem is if the client does not set the last will correctly, or the server/library doesn't handle it. We'd get stuck thinking a client is connected, and use ten times the data forever.

There is also the related issue of a locked-up client (not disconnected). The heartbeat lets us know that the client is both connected and working.

Compared to the amount of data coming from the module to the client, a single client->module message once a minute, sent only when the client app is actually running and connected, is minimal.

Any other way to handle the misbehaving client issue?

Requesting values:
In order to decrease data usage even more, allow clients to request the
value of a metric using <prefix>/client/<clientid>/request/metric with
the metric name in the value, or using
<prefix>/client/<clientid>/request/config with the config name in the
value. Allowing wildcards like m.net.* or just * allows multiple or all
metrics to be requested with a single request. Then e.g. the app would
request all the metrics it wants to display.

OK. Let's deal with that later. Rather than the traditional broadcast mechanism, there is also the opportunity for a request-broadcast system, where the client tells the module which metrics it is interested in. I am just not sure how that fits into our framework.
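For whenever we do pick this up: the wildcard matching Jakob describes could be as simple as shell-style pattern matching over metric names. A hypothetical sketch (the helper name is illustrative):

```python
from fnmatch import fnmatchcase

def select_metrics(pattern: str, metrics: dict) -> dict:
    """Return the subset of metrics whose names match the requested
    pattern: '*' matches everything, 'm.net.*' a whole sub-tree."""
    return {name: value for name, value in metrics.items()
            if fnmatchcase(name, pattern)}
```

The module would run the requested pattern over its metric table and publish each matching metric on its usual topic.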

Encryption:
TLS in general seems to be missing from OVMS. Downloading
firmware updates over HTTP and not doing any verification is kind of
meh. The problem is (as noted in my last email) with the CA certs. My
suggestion is to ship with one default CA (e.g. Let's Encrypt) and
allow the user to replace it with a different one.
I would love to implement this, but I am currently in exam phase and
won't have much time to spare until next week.

I think we can have a list of trusted CAs. Some can be provided as standard in the firmware (for example, api.openvehicles.com currently uses '/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root' so we should have that in there). Others can be provided in the config (or the /store file system). We build up our list, to be used by the TLS library, by combining the two sources.
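A rough sketch of combining the two sources. The built-in certificate below is just a placeholder string, and the directory layout (user .pem files in /store) is an assumption, not the actual firmware behaviour:

```python
import os

# Placeholder for roots shipped with the firmware image.
BUILTIN_CAS = [
    "-----BEGIN CERTIFICATE-----\n"
    "...AddTrust External CA Root (placeholder)...\n"
    "-----END CERTIFICATE-----"
]

def trusted_ca_bundle(store_dir: str) -> str:
    """Combine firmware-provided CAs with any user-supplied .pem files
    from the store directory into one bundle for the TLS library."""
    certs = list(BUILTIN_CAS)
    if os.path.isdir(store_dir):
        for name in sorted(os.listdir(store_dir)):
            if name.endswith(".pem"):
                with open(os.path.join(store_dir, name)) as f:
                    certs.append(f.read().strip())
    # PEM bundles are simply concatenated certificates.
    return "\n".join(certs)
```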

Authentication:
If you can live with moving to mosquitto[0] (the MQTT broker by
Eclipse), it has a very good auth plugin[1]. All you have to do is
write an SQL statement which retrieves the hashed passwords from the
OVMS database by username (see [2]). If you are on Debian it's just an
apt install mosquitto mosquitto-auth-plugin away.
It is very important to set up ACLs (access control lists) which make
sure only user X, and no one else, can subscribe/publish to ovms/X.
Luckily this is also handled by mosquitto_auth_plug.

[0] https://mosquitto.org/
[1] https://github.com/jpmens/mosquitto-auth-plug
[2] https://github.com/jpmens/mosquitto-auth-plug#postgresql

I am currently using mosquitto, but honestly think it is severely lacking with regard to plugins (and authentication specifically). While jpmens has written a neat library, there is still no simple way to just check a user+password for authentication. The closest is the HTTP API, but that seems kludgy. I also don't understand why this is not packaged with mosquitto by default - on CentOS we have to do a nasty custom build from source.

The problem with jpmens, and why it won’t work for us, is:

The SQL query for looking up a user's password hash is mandatory. The query MUST return a single row only (any other number of rows is considered to be "user not found"), and it MUST return a single column only with the PBKDF2 password hash.

Drupal passwords, while stored in a MySQL database, are not in PBKDF2 format. The jpmens approach is fine if you want a custom database table just for this application, but not if you are trying to authenticate against some other application that uses a different password hash algorithm.

By comparison, VerneMQ includes all these authentication plugins as standard code in the main distribution. It also includes a simple system for writing hook scripts (including authentication, if you want) in Lua (or the base Erlang if you are desperate).

I’ve been struggling so hard with mosquitto on this that I’m about ready to give up and just extend the Drupal openvehicles extension to allow a username+password to be checked via an API. But that is way too much work, and way too kludgy, for something that really should be simple and commonplace.

Regards, Mark.

On 5 Jul 2018, at 5:41 PM, Jakob Löw <ovms@m4gnus.de> wrote:

Hey,

Regarding naming:
I suppose the following topic names:
metrics:       <prefix>/metric/#
events:        <prefix>/event/#
notifications: <prefix>/notify/#
config:        <prefix>/config/#
logs:          <prefix>/log/<tag>
active:        <prefix>/client/<clientid>/active
requests:      <prefix>/client/<clientid>/request/#
commands:      <prefix>/client/<clientid>/command/<command id>
cmd responses: <prefix>/client/<clientid>/response/<command id>

Filtering:
In order to decrease used data, a blacklist of metrics which are not
sent (like m.freeram or m.monotonic) would be cool.

heartbeats:
I see you've implemented a requirement that clients send a message every
two minutes. A topic like the above .../<clientid>/active, which is set
to 1 when a client connects and to 0 by its last will, would have the
same effect without requiring heartbeats every two minutes, and thus
use less of our precious cellular data.

Requesting values:
In order to decrease data usage even more, allow clients to request the
value of a metric using <prefix>/client/<clientid>/request/metric with
the metric name in the value, or using
<prefix>/client/<clientid>/request/config with the config name in the
value. Allowing wildcards like m.net.* or just * allows multiple or all
metrics to be requested with a single request. Then e.g. the app would
request all the metrics it wants to display.

Encryption:
TLS in general seems to be missing from OVMS. Downloading
firmware updates over HTTP and not doing any verification is kind of
meh. The problem is (as noted in my last email) with the CA certs. My
suggestion is to ship with one default CA (e.g. Let's Encrypt) and
allow the user to replace it with a different one.
I would love to implement this, but I am currently in exam phase and
won't have much time to spare until next week.

Authentication:
If you can live with moving to mosquitto[0] (the MQTT broker by
Eclipse), it has a very good auth plugin[1]. All you have to do is
write an SQL statement which retrieves the hashed passwords from the
OVMS database by username (see [2]). If you are on Debian it's just an
apt install mosquitto mosquitto-auth-plugin away.
It is very important to set up ACLs (access control lists) which make
sure only user X, and no one else, can subscribe/publish to ovms/X.
Luckily this is also handled by mosquitto_auth_plug.

[0] https://mosquitto.org/
[1] https://github.com/jpmens/mosquitto-auth-plug
[2] https://github.com/jpmens/mosquitto-auth-plug#postgresql

On Thu, 2018-07-05 at 09:26 +0800, Mark Webb-Johnson wrote:
I am far from an expert in MQTT. Not even a novice. So, the work
below is ‘best efforts’. Any help / comments / suggestions would be
most appreciated. In particular, a big thanks to Jakob for his
contributions to this so far.

With yesterday’s merge, and commits, we have a very very basic OVMS
server v3 implementation. It sends the metrics, and doesn’t crash
(much). The overall design is as follows:

We use the mongoose MQTT library. We don't do anything special, and
everything follows the MQTT 3.1 standard.

MQTT has the concept of topics. Our default prefix for everything is:

ovms/<mqtt-username>/<vehicleid>/

(that can be customised with config server.v3 topic.prefix)

Metrics are sent as topics. The metric name is appended to the topic
prefix plus an "m/" suffix, with "." characters converted to "/" to
match the MQTT conventions. The value is simply the metric value (as a
string). With our current arrangement, this adds m/m/, m/s/, and m/v/
sub-trees to the MQTT topic hierarchy. Clients can subscribe to
<prefix>/m/# to receive all metrics. The ‘retained’ flag is set on
these metrics, at QOS level 0 (so subscribing clients will get a copy
of the last known values for these metrics, even with a disconnected
vehicle).
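For reference, the name-to-topic mapping described above amounts to the following one-liner (the helper name is illustrative, not from the actual code):

```python
def metric_topic(prefix: str, metric: str) -> str:
    """Map a metric name to its MQTT topic: append the metric to the
    prefix plus 'm/', converting '.' to '/' per MQTT conventions."""
    return prefix + "m/" + metric.replace(".", "/")
```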

The metric s.v3.connected is maintained by the ServerV3 code. When a
successful MQTT connection is made, and login successful, that is set
true (yes). An MQTT last-will-and-testament is set so that if the OVMS
module disconnects the server will automatically update that to false
(no). The ‘retained’ flag is set on this metric, at QOS level 0 (so
subscribing clients will get a copy of the last known state). Clients
can use this to see if the vehicle is connected to the server.

Connecting clients are expected to write a “1” value to the
<prefix>/c/<clientid> topic, and to repeat that write once a minute.
They are also expected to use a last-will-testament on that same
topic with value “0”. QOS 1 should be used, and these topics should
not be retained. The server V3 code subscribes to this <prefix>/c/#
topic, so it gets informed of all connected clients. It can then
update the s.v3.peers metric appropriately. Clients are expected to
monitor the <prefix>/s/v3/connected topic, so that if it becomes
‘yes’ (true) the client should send <prefix>/c/<clientid> “1”
immediately. This mechanism allows a newly connected vehicle to
immediately know if one or more clients is connected.

The Server v3 sets a timeout for <prefix>/c/<clientid> connections of
2 minutes. If that mqtt topic is not sent again within that time, it
is expired (and that client is treated as disconnected).
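The server-side bookkeeping described in the last two paragraphs could look roughly like this sketch. `PeerTracker` and its methods are illustrative names, not the actual ServerV3 code:

```python
class PeerTracker:
    """Sketch: track <prefix>/c/<clientid> heartbeats and expire
    clients that stay silent for more than two minutes."""
    TIMEOUT = 120  # seconds

    def __init__(self):
        self.last_seen = {}  # clientid -> timestamp of last "1"

    def on_message(self, client_id, payload, now):
        if payload == "1":
            self.last_seen[client_id] = now
        else:
            # "0": last will fired, or a controlled disconnect.
            self.last_seen.pop(client_id, None)

    def peers(self, now):
        # Drop clients whose heartbeat has expired, then count the
        # rest; this count would feed the s.v3.peers metric.
        self.last_seen = {c: t for c, t in self.last_seen.items()
                          if now - t <= self.TIMEOUT}
        return len(self.last_seen)
```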

Similar to the v2 code, the server v3 transmits modified metrics once
a minute if there are one or more clients connected, or once every
ten minutes if there are no clients connected.

All the above has been implemented. To reach parity with v2, and
exceed its functionality in places, we have a few more things to do:

Notifications

On the vehicle side, I am proposing to use the <prefix>/notify/<type>
namespace for these, using QOS 2 messages without retention. Clients
can listen to those, if necessary. We can also have a central daemon
running that listens to the ovms/+/+/n/# topic pattern to receive these
notifications and handle appropriately. Using QOS 2 we can confirm
the reception of the notification / historical data, and mark it as
delivered, appropriately. However, that would only confirm delivery
to the MQTT server, not to the central daemon; if the daemon was not
running, the notification would be lost.

Textual Commands

I am proposing to use the <prefix>/cmd/<clientid>/c/ and
<prefix>/cmd/<clientid>/r/ namespaces for this, using QOS 2 messages
without retention. The value in the /c/ would be the command, and the
response would be written to the matching /r/ topic. The commands and
responses could be prefixed by an identifier (like in the IMAP
protocol), so responses can be matched to commands by the clients. The client
side can simply subscribe to itself, and the vehicle side subscribes
to <prefix>/cmd/#. In this way, commands cannot be duplicated, and
clients don’t see responses to commands they didn’t initiate (which
was an issue with v2).
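A sketch of the IMAP-style tagging idea. The tag format and class are purely illustrative; any scheme that yields unique per-client tags would do:

```python
import itertools

class CommandChannel:
    """Sketch: each command gets a unique tag so the client can match
    responses on the /r/ topic to the commands it sent on /c/."""

    def __init__(self):
        self._seq = itertools.count(1)
        self._pending = {}  # tag -> command awaiting a response

    def send(self, command: str) -> str:
        # Produce the payload to publish on the /c/ topic.
        tag = f"A{next(self._seq):03d}"
        self._pending[tag] = command
        return f"{tag} {command}"

    def on_response(self, payload: str):
        # Responses echo the tag of the command they answer.
        tag, _, text = payload.partition(" ")
        command = self._pending.pop(tag, None)
        return command, text
```

A response whose tag is unknown (e.g. for a command another client sent) simply returns `None` as the command, so it can be ignored.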

Numeric Commands

I am not sure if we need to implement the numeric commands, as used
in the v2 protocol. It seems to me that we can use textual commands.

Config Access

I am not sure if we need this, beyond the command processor. If we
do, we could expose a /config/ namespace.

Events

It would be nice to expose events (except for the ticker.* ones, of
course). This could be done by a <prefix>/events topic, using QOS 2
and no retention.

Logs

It would be nice to expose logs. This could be done by a
<prefix>/logs topic, using QOS 1 and no retention.

Security

We need to add SSL support. I am trying to get an authentication
plugin for mosquitto / vernemq written so that we can authenticate
straight from the OVMS database that is already running on the
servers, and give each user their own ovms/<userid>/# namespace. That
way, the configuration for v3 on the vehicle/apps/clients is simple -
just put in the server username and password (no separate vehicle
passwords necessary).

I think that is it. The above would form the basis of the
specification for this. As this is the basis for future work and
direction in OVMS, it is important that we get it right, so all
comments / suggestions most welcome.

Regards, Mark.

_______________________________________________
OvmsDev mailing list
OvmsDev@lists.openvehicles.com
http://lists.openvehicles.com/mailman/listinfo/ovmsdev