Hey,

Regarding naming: I propose the following topic names:

metrics: <prefix>/metric/#
events: <prefix>/event/#
notifications: <prefix>/notify/#
config: <prefix>/config/#
logs: <prefix>/log/<tag>
active: <prefix>/client/<clientid>/active
requests: <prefix>/client/<clientid>/request/#
commands: <prefix>/client/<clientid>/command/<command id>
cmd responses: <prefix>/client/<clientid>/response/<command id>

Filtering: In order to decrease data usage, a blacklist of metrics which are not sent (like m.freeram or m.monotonic) would be cool.

Heartbeats: I've seen you implemented that clients have to send a message every two minutes. A topic like the above .../<clientid>/active, which is set to 1 when a client connects and to 0 by its last will, would have the same effect but not require heartbeats to be sent every two minutes, and thus use less of our precious cellular data.

Requesting values: In order to decrease data usage even more, allow clients to request the value of a metric using <prefix>/client/<clientid>/request/metric with the metric name in the value, or a config entry using <prefix>/client/<clientid>/request/config with the config name in the value. Allowing wildcards like m.net.* or just * allows requesting multiple or all metrics with a single request. Then e.g. the app would request all the metrics it wants to display.

Encryption: TLS in general seems to be a missing thing with OVMS. Downloading firmware updates over http and not doing any verification is kind of meh. The problem is (as noted in my last email) with the CA certs. My suggestion is to ship with one default CA (e.g. Let's Encrypt) and allow the user to replace it with a different one. I would love to implement this, but I am currently in exam phase and won't have much time to spare until next week.

Authentication: If you can live with moving to mosquitto [0] (the MQTT broker by Eclipse), they have a very good auth plugin [1]. All you have to do is write an SQL statement which retrieves the hashed passwords from the OVMS database by username (see [2]). If you are on Debian it's just an apt install mosquitto mosquitto-auth-plugin away. It is very important to set up ACLs (access control lists) which make sure only user X can subscribe/publish to ovms/X and no one else. Luckily this is also handled by mosquitto-auth-plug.

[0] https://mosquitto.org/
[1] https://github.com/jpmens/mosquitto-auth-plug
[2] https://github.com/jpmens/mosquitto-auth-plug#postgresql

On Thu, 2018-07-05 at 09:26 +0800, Mark Webb-Johnson wrote:
I am far from an expert in MQTT. Not even a novice. So, the work below is ‘best efforts’. Any help / comments / suggestions would be most appreciated. In particular, a big thanks to Jakob for his contributions to this so far.
With yesterday’s merge, and commits, we have a very very basic OVMS server v3 implementation. It sends the metrics, and doesn’t crash (much). The overall design is as follows:
We use the mongoose MQTT library. We don't do anything special, and everything follows the MQTT 3.1 standard.
MQTT has the concept of topics. Our default prefix for everything is:
ovms/<mqtt-username>/<vehicleid>/
(that can be customised with config server.v3 topic.prefix)
Metrics are sent as topics. The metric name is appended to the topic prefix under an "m/" segment, with "." characters converted to "/" to match MQTT conventions. The value is simply the metric value (as a string). With our current arrangement, this adds m/m/, m/s/, and m/v/ sub-trees to the MQTT topic hierarchy. Clients can subscribe to <prefix>/m/# to receive all metrics. The 'retained' flag is set on these metrics, at QOS level 0 (so subscribing clients will get a copy of the last known values for these metrics, even with a disconnected vehicle).
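For illustration, the name-to-topic mapping described above could be sketched like this (a Python sketch; the prefix, username, vehicle id, and metric names are examples, not the actual firmware code):

```python
def metric_topic(prefix: str, metric_name: str) -> str:
    """Map an OVMS metric name to its MQTT topic: the name is appended
    to the topic prefix under "m/", with "." converted to "/"."""
    return prefix + "m/" + metric_name.replace(".", "/")

# Example with the default prefix ovms/<mqtt-username>/<vehicleid>/
prefix = "ovms/alice/MYCAR/"
print(metric_topic(prefix, "v.bat.soc"))       # ovms/alice/MYCAR/m/v/bat/soc
print(metric_topic(prefix, "s.v3.connected"))  # ovms/alice/MYCAR/m/s/v3/connected
```

This is also where the m/m/, m/s/, and m/v/ sub-trees come from: they are just the m., s., and v. metric name roots after conversion.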
The metric s.v3.connected is maintained by the ServerV3 code. When a successful MQTT connection is made, and login is successful, that metric is set to true (yes). An MQTT last-will-and-testament is set so that if the OVMS module disconnects, the server will automatically update it to false (no). The 'retained' flag is set on this metric, at QOS level 0 (so subscribing clients will get a copy of the last known state). Clients can use this to see if the vehicle is connected to the server.
Connecting clients are expected to write a “1” value to the <prefix>/c/<clientid> topic, and to repeat that write once a minute. They are also expected to use a last-will-testament on that same topic with value “0”. QOS 1 should be used, and these topics should not be retained. The server V3 code subscribes to this <prefix>/c/# topic, so it gets informed of all connected clients. It can then update the s.v3.peers metric appropriately. Clients are expected to monitor the <prefix>/s/v3/connected topic, so that if it becomes ‘yes’ (true) the client should send <prefix>/c/<clientid> “1” immediately. This mechanism allows a newly connected vehicle to immediately know if one or more clients is connected.
The Server v3 sets a timeout of 2 minutes for <prefix>/c/<clientid> connections. If that MQTT topic is not published again within that time, it expires (and that client is treated as disconnected).
Similar to the v2 code, the server v3 transmits modified metrics once a minute if there are one or more clients connected, or once every ten minutes if there are no clients connected.
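As a hedged illustration of the heartbeat tracking and transmit-interval rules above (class and method names are my own invention; the real server-v3 code in the module differs):

```python
class PeerTracker:
    """Track <prefix>/c/<clientid> heartbeats, expire stale clients
    after the 2-minute timeout, and pick the metric send interval."""
    TIMEOUT = 120  # seconds without a heartbeat before expiry

    def __init__(self):
        self.last_seen = {}  # clientid -> timestamp of last "1"

    def on_heartbeat(self, clientid: str, payload: str, now: float):
        if payload == "1":
            self.last_seen[clientid] = now
        else:  # "0" from a last-will or explicit disconnect
            self.last_seen.pop(clientid, None)

    def peers(self, now: float) -> int:
        # Drop clients not heard from within the timeout
        self.last_seen = {c: t for c, t in self.last_seen.items()
                          if now - t <= self.TIMEOUT}
        return len(self.last_seen)

    def send_interval(self, now: float) -> int:
        # Modified metrics go out every minute with peers connected,
        # every ten minutes otherwise.
        return 60 if self.peers(now) > 0 else 600
```

For example, a client heartbeat at t=0 keeps the fast 60-second interval until t=120, after which the client expires and the module falls back to the 600-second interval.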
All the above has been implemented. To reach parity with v2, and exceed its functionality in places, we have a few more things to do:
Notifications
On the vehicle side, I am proposing to use the <prefix>/notify/<type> namespace for these, using QOS 2 messages without retention. Clients can listen to those, if necessary. We can also have a central daemon running that listens to ovms/+/+/n/# topic pattern to receive these notifications and handle appropriately. Using QOS 2 we can confirm the reception of the notification / historical data, and mark it as delivered, appropriately. However, that would only confirm delivery to the MQTT server, not to the central daemon; if the daemon was not running, the notification would be lost.
Textual Commands
I am proposing to use the <prefix>/cmd/<clientid>/c/ and <prefix>/cmd/<clientid>/r/ namespaces for this, using QOS 2 messages without retention. The value in /c/ would be the command, and the response would be written to the matching /r/ topic. The commands and responses could be prefixed by an identifier (like in the IMAP protocol) so responses can be matched to commands by the clients. The client side can simply subscribe to itself, and the vehicle side subscribes to <prefix>/cmd/#. In this way, commands cannot be duplicated, and clients don't see responses to commands they didn't initiate (which was an issue with v2).
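The IMAP-style tagging could look like this (a sketch only; the tag format "A001" and the class below are illustrative, not part of the proposal):

```python
import itertools

class CommandChannel:
    """Build tagged commands for <prefix>/cmd/<clientid>/c/ and match
    responses arriving on the corresponding /r/ topic by tag."""

    def __init__(self, prefix: str, clientid: str):
        self.cmd_topic = f"{prefix}cmd/{clientid}/c/"
        self.rsp_topic = f"{prefix}cmd/{clientid}/r/"
        self._ids = itertools.count(1)
        self.pending = {}  # tag -> command text awaiting a response

    def make_command(self, command: str):
        """Return (topic, payload) for a tagged command."""
        tag = f"A{next(self._ids):03d}"
        self.pending[tag] = command
        return self.cmd_topic, f"{tag} {command}"

    def match_response(self, payload: str):
        """Match a /r/ payload back to its command by tag."""
        tag, _, response = payload.partition(" ")
        command = self.pending.pop(tag, None)
        return command, response
```

So a client publishing "A001 stat" to its /c/ topic would match a later "A001 ..." payload on its /r/ topic back to the "stat" command, while responses tagged for other clients never reach it at all.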
Numeric Commands
I am not sure if we need to implement the numeric commands, as used in the v2 protocol. It seems to me that we can use textual commands.
Config Access
I am not sure if we need this, beyond the command processor. If we do, we could expose a /config/ namespace.
Events
It would be nice to expose events (except for the ticker.* ones, of course). This could be done by a <prefix>/events topic, using QOS 2 and no retention.
Logs
It would be nice to expose logs. This could be done by a <prefix>/logs topic, using QOS 1 and no retention.
Security
We need to add SSL support. I am trying to get an authentication plugin for mosquitto / vernemq written so that we can authenticate straight from the OVMS database that is already running on the servers, and give each user their own ovms/<userid>/# namespace. That way, the configuration for v3 on the vehicle/apps/clients is simple - just put in the server username and password (no separate vehicle passwords necessary).
I think that is it. The above would form the basis of the specification for this. As this is the basis for future work and direction in OVMS, it is important that we get it right, so all comments / suggestions most welcome.
Regards, Mark.
_______________________________________________ OvmsDev mailing list OvmsDev@lists.openvehicles.com http://lists.openvehicles.com/mailman/listinfo/ovmsdev
Jakob,
I propose the following topic names:

metrics: <prefix>/metric/#
events: <prefix>/event/#
notifications: <prefix>/notify/#
config: <prefix>/config/#
logs: <prefix>/log/<tag>
active: <prefix>/client/<clientid>/active
requests: <prefix>/client/<clientid>/request/#
commands: <prefix>/client/<clientid>/command/<command id>
cmd responses: <prefix>/client/<clientid>/response/<command id>
All OK. I am fine with this, and it looks clean. I will make the changes today, as I want to get this into some cars asap so we can get a feel for how it behaves.
Filtering: In order to decrease data usage, a blacklist of metrics which are not sent (like m.freeram or m.monotonic) would be cool.
Yep. Makes sense.
heartbeats: I've seen you implemented that clients have to send a message every two minutes. A topic like the above .../<clientid>/active which is set to 1 when a client connects and to 0 by its last will would have the same effect but not require heartbeats to be sent every two minutes and thus use less of our precious cellular data.
The problem is if the client does not set the last will correctly, or server/library doesn’t handle it. We’d get stuck thinking a client is connected and use 10 times the data forever. There is also the related issue of a locked-up client (not disconnected). The heartbeat lets us know that the client is both connected and working. Compared to the amount of data coming from the module to client, one single message once a minute client->module, and only when the client App is actually running and connected, is minimal. Any other way to handle the misbehaving client issue?
Requesting values: In order to decrease data usage even more, allow clients to request the value of a metric using <prefix>/client/<clientid>/request/metric with the metric name in the value, or a config entry using <prefix>/client/<clientid>/request/config with the config name in the value. Allowing wildcards like m.net.* or just * allows requesting multiple or all metrics with a single request. Then e.g. the app would request all the metrics it wants to display.
OK. Let's deal with that later. Rather than the traditional broadcast mechanism, there is also the opportunity to have a request-broadcast system where the client tells the module what metrics it is interested in. I am just not sure how that fits into our framework.
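The wildcard matching part of this is straightforward; something like the following (metric names and values are made up for illustration):

```python
from fnmatch import fnmatchcase

def select_metrics(pattern: str, metrics: dict) -> dict:
    """Resolve a request pattern like 'm.net.*' or '*' against the
    current metric set, returning the matching name/value pairs."""
    return {name: value for name, value in metrics.items()
            if fnmatchcase(name, pattern)}

metrics = {"m.net.provider": "TestNet",
           "m.net.sq": "-85",
           "v.bat.soc": "80"}
print(select_metrics("m.net.*", metrics))  # just the two m.net metrics
print(len(select_metrics("*", metrics)))   # 3
```

The module would then publish only the selected metrics on their usual retained topics, rather than everything on every change.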
Encryption: TLS in general seems to be a missing thing with OVMS. Downloading firmware updates over http and not doing any verification is kind of meh. The problem is (as noted in my last email) with the CA certs. My suggestion is to ship with one default CA (e.g. Let's Encrypt) and allow the user to replace it with a different one. I would love to implement this, but I am currently in exam phase and won't have much time to spare until next week.
I think we can have a list of trusted CAs. Some can be provided as standard in the firmware (for example, api.openvehicles.com currently uses '/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root', so we should have that in there). Others can be provided in the config (or the /store file system). We build up our list, to be used by the TLS library, by combining the two sources.
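Conceptually this is just concatenating two PEM lists (the certificate strings below are placeholders, and the function name is mine, not anything in the firmware):

```python
def build_ca_bundle(firmware_cas: list, store_cas: list) -> str:
    """Combine built-in firmware CAs with user-provided CAs from the
    config / the /store file system into one PEM bundle for the TLS
    library, skipping duplicate entries."""
    seen, bundle = set(), []
    for pem in firmware_cas + store_cas:
        if pem not in seen:  # a user copy of a built-in CA adds nothing
            seen.add(pem)
            bundle.append(pem)
    return "\n".join(bundle)
```

The firmware set gives a sane out-of-the-box default, while the /store set lets a user running their own server trust their own CA.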
Authentication: If you can live with moving to mosquitto [0] (the MQTT broker by Eclipse), they have a very good auth plugin [1]. All you have to do is write an SQL statement which retrieves the hashed passwords from the OVMS database by username (see [2]). If you are on Debian it's just an apt install mosquitto mosquitto-auth-plugin away. It is very important to set up ACLs (access control lists) which make sure only user X can subscribe/publish to ovms/X and no one else. Luckily this is also handled by mosquitto-auth-plug.
[0] https://mosquitto.org/
[1] https://github.com/jpmens/mosquitto-auth-plug
[2] https://github.com/jpmens/mosquitto-auth-plug#postgresql

I am currently using mosquitto, but honestly think it is severely lacking with regard to plugins (and authentication specifically). While jpmens has written a neat library, there is still no simple way to just check a user+password for authentication. The closest is the HTTP API, but that seems kludgy. I also don't understand why this is not packaged with mosquitto by default - on CentOS we have to do a nasty custom build from source.
The problem with jpmens, and why it won't work for us, is:

The SQL query for looking up a user's password hash is mandatory. The query MUST return a single row only (any other number of rows is considered to be "user not found"), and it MUST return a single column only with the PBKDF2 password hash.

Drupal passwords, while stored in a MySQL database, are not in PBKDF2 format. The jpmens approach is fine if you want to have a custom database table just for this application, but not if you are trying to authenticate against some other application that uses a different password hash algorithm.

By comparison, vernemq includes all these authentication plugins as standard code in the main distribution. It also includes a simple system to write hook scripts (including authentication, if you want) in Lua (or the underlying Erlang if you are desperate).

I've been struggling so hard with mosquitto on this that I'm about ready to give up and just extend the Drupal openvehicles extension to allow a username+password to be checked via an API. But that is way too much work, and way too kludgy, for something that really should be simple and commonplace.

Regards, Mark.
On 5 Jul 2018, at 5:41 PM, Jakob Löw <ovms@m4gnus.de> wrote:
heartbeats: I've seen you implemented that clients have to send a message every two minutes. A topic like the above .../<clientid>/active which is set to 1 when a client connects and to 0 by its last will would have the same effect but not require heartbeats to be sent every two minutes and thus use less of our precious cellular data.
The problem is if the client does not set the last will correctly, or server/library doesn’t handle it. We’d get stuck thinking a client is connected and use 10 times the data forever.
There is also the related issue of a locked-up client (not disconnected). The heartbeat lets us know that the client is both connected and working.
Compared to the amount of data coming from the module to client, one single message once a minute client->module, and only when the client App is actually running and connected, is minimal.
Any other way to handle the misbehaving client issue?
To provide another example of this: I'm using the mqtt.fx desktop client. That can set up a last will just fine, but if you press the disconnect button then the last will is not sent to the OVMS module. Force quitting the application does correctly send the last will. It seems that the MQTT specification says that if the client _cleanly_ disconnects (with the DISCONNECT mqtt message) then the last will should not be published. Urgh.

So, our full spec for this should be:

Client should publish "<prefix>/client/<clientid>/active = 1":
- When it first connects
- Every subsequent 60 seconds
- When it receives a "<prefix>/metric/s/v3/connected = yes"

Client should publish "<prefix>/client/<clientid>/active = 0":
- As its MQTT last-will-and-testament
- Prior to a controlled disconnection

QOS 0, not retained, is fine for these.

Regards, Mark.

P.S. I've made the other changes, for the topic hierarchy. It seems OK; committed and pushed.
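The client-side behaviour above can be sketched as a small state machine (publish() here just records what a real MQTT client library would send; names are illustrative):

```python
class ClientPresence:
    """Client-side sketch of the /active spec: publish 1 on connect,
    on the 60-second timer, and on seeing connected=yes; publish 0 on
    controlled disconnect, with 0 also registered as the last will."""
    HEARTBEAT = 60  # seconds between periodic publishes

    def __init__(self, prefix: str, clientid: str):
        self.topic = f"{prefix}client/{clientid}/active"
        self.sent = []  # (topic, payload) pairs that would be published

    def publish(self, payload: str):
        self.sent.append((self.topic, payload))

    def last_will(self):
        # Registered with the broker at connect time, covering crashes
        # and dropped connections (but not clean DISCONNECTs).
        return self.topic, "0"

    def on_connect(self):
        self.publish("1")

    def on_heartbeat_timer(self):
        # Fired every HEARTBEAT seconds while connected
        self.publish("1")

    def on_message(self, topic: str, payload: str):
        # A newly connected vehicle announces itself; answer at once
        if topic.endswith("metric/s/v3/connected") and payload == "yes":
            self.publish("1")

    def on_disconnect_requested(self):
        # Clean disconnect: the last will won't fire, so publish 0 first
        self.publish("0")
```

The explicit "0" before a controlled disconnect is exactly the mqtt.fx gap: a clean DISCONNECT suppresses the last will, so the client has to say goodbye itself.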
On 6 Jul 2018, at 11:15 AM, Mark Webb-Johnson <mark@webb-johnson.net> wrote:
Jakob,
I suppose the following topic names: metrics: <prefix>/metric/# events: <prefix>/event/# notifications: <prefix>/notify/# config: <prefix>/config/# logs: <prefix>/log/<tag> active: <prefix>/client/<clientid>/active requests: <prefix>/client/<clientid>/request/# commands: <prefix>/client/<clientid>/command/<command id> cmd responses: <prefix>/client/<clientid>/response/<command id>
All ok. I am fine with this, and looks clean. I will make the changes today, as I want to get this into some cars asap so we can get a feel for how it behaves.
Filtering: In order to decrease used data a blacklist of metrics which are not send (like m.freeram or m.monotonic) would be cool.
Yep. Makes sense.
heartbeats: I've seen you implemented that clients have to send a message every two minutes. A topic like the above .../<clientid>/active which is set to 1 when a client connects and to 0 by its last will would have the same effect but not require heartbeats to be sent every two minutes and thus use less of our precious cellular data.
The problem is if the client does not set the last will correctly, or server/library doesn’t handle it. We’d get stuck thinking a client is connected and use 10 times the data forever.
There is also the related issue of a locked-up client (not disconnected). The heartbeat lets us know that the client is both connected and working.
Compared to the amount of data coming from the module to client, one single message once a minute client->module, and only when the client App is actually running and connected, is minimal.
Any other way to handle the misbehaving client issue?
Requesting values: In order to decrease data usage even more allowing clients to request the value of a metric using <prefix>/client/<clientid>/request/metric and the metric name in the value or using <prefix>/client/<clientid>/request/config and the config name in the value. Allowing wildcards like m.net <http://m.net/>.* or just * allows to request multiple or all metrics with a single request. Then e.g. the app would request all metrics it wants to display.
ok. Let’s deal with that later. Rather than the traditional broadcast mechanism, there is also the opportunity to have a request-broadcast system where the client tells the module what metrics it is interested in. I am just not sure how that fits into our framework.
Encryption: TLS in general seems to be a missing thing with OVMS. Downloading firmware updates over http and not doing any verification is kind of meh. The problem is (as noted in my last email) with the CA certs. My suggestion is to ship with one default CA (e.g. Let's Encrypt) and allow the user to replace it with a different one. I would love to implement this, but I am currently in exam phase and won't have much time to spare until next week.
I think we can have a list of trusted CAs. Some can be provided as standard in the firmware (for example, api.openvehicles.com <http://api.openvehicles.com/> currently uses '/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root’ so we should have that in there). Others can be provided in the config (or /store file system). We build up our list, to be used by TLS library, by combining the two sources.
Authentication: If you can live with moving to mosquitto[0] (MQTT broker by eclipse) they have a very good auth plugin[1]. All you have to do is write an SQL statement which retrieves the hashed passwords from the OVMS database by username (see [2]). If you are on debian it's just an apt install mosquitto mosquitto-auth-plugin away. It is very important to set up ACLs (access control lists) which make sure only user X can subscribe/publish to ovms/X and no one else. Luckily this is also handled by mosquitto_auth_plug.
[0] https://mosquitto.org/ [1] https://github.com/jpmens/mosquitto-auth-plug [2] https://github.com/jpmens/mosquitto-auth-plug#postgresql
I am currently using mosquitto, but honestly think it is severely lacking with regard to plugins (and authentication specifically). While jpmens has written a neat library, there is still no simple way to just check a user+password for authentication. The closest is the HTTP API, but that seems kludgy. I also don’t understand why this is not packaged with mosquitto by default - on centos we have to do a nasty custom build-from-source-code.
The problem with jpmens, and why it won’t work for us, is:
The SQL query for looking up a user's password hash is mandatory. The query MUST return a single row only (any other number of rows is considered to be "user not found"), and it MUST return a single column only with the PBKDF2 password hash.
Drupal passwords, while stored in a mysql database, are not in PBKDF2 format. The jpmens approach is fine if you want to have a custom database table just for this application, but it breaks down if you are trying to authenticate against some other application using a different password hash algorithm.
By comparison, vernemq includes all these authentication plugins as standard code as part of the main distribution. It also includes a simple system to write hook scripts (including authentication if you want) in Lua (or the base erlang if you are desperate).
I’ve been struggling so hard with mosquitto for this that I’m about ready to give up and just extend the drupal openvehicles extension to allow a username+password to be checked via API. But it is way too much work and way too kludgy for something that really should be simple and commonplace.
Regards, Mark.
On 5 Jul 2018, at 5:41 PM, Jakob Löw <ovms@m4gnus.de> wrote:
Hey,
Regarding naming: I suppose the following topic names:
metrics: <prefix>/metric/#
events: <prefix>/event/#
notifications: <prefix>/notify/#
config: <prefix>/config/#
logs: <prefix>/log/<tag>
active: <prefix>/client/<clientid>/active
requests: <prefix>/client/<clientid>/request/#
commands: <prefix>/client/<clientid>/command/<command id>
cmd responses: <prefix>/client/<clientid>/response/<command id>
Filtering: In order to decrease used data, a blacklist of metrics which are not sent (like m.freeram or m.monotonic) would be cool.
heartbeats: I've seen you implemented that clients have to send a message every two minutes. A topic like the above .../<clientid>/active, which is set to 1 when a client connects and to 0 by its last will, would have the same effect but not require heartbeats to be sent every two minutes, and thus use less of our precious cellular data.
Requesting values: In order to decrease data usage even more, allow clients to request the value of a metric using <prefix>/client/<clientid>/request/metric with the metric name in the value, or <prefix>/client/<clientid>/request/config with the config name in the value. Allowing wildcards like m.net.* or just * lets a single request fetch multiple or all metrics. Then e.g. the app would request all the metrics it wants to display.
Encryption: TLS in general seems to be a missing thing with OVMS. Downloading firmware updates over http and not doing any verification is kind of meh. The problem is (as noted in my last email) with the CA certs. My suggestion is to ship with one default CA (e.g. Let's Encrypt) and allow the user to replace it with a different one. I would love to implement this, but I am currently in exam phase and won't have much time to spare until next week.
Authentication: If you can live with moving to mosquitto[0] (MQTT broker by eclipse) they have a very good auth plugin[1]. All you have to do is write an SQL statement which retrieves the hashed passwords from the OVMS database by username (see [2]). If you are on debian it's just an apt install mosquitto mosquitto-auth-plugin away. It is very important to set up ACLs (access control lists) which make sure only user X can subscribe/publish to ovms/X and no one else. Luckily this is also handled by mosquitto_auth_plug.
[0] https://mosquitto.org/ [1] https://github.com/jpmens/mosquitto-auth-plug [2] https://github.com/jpmens/mosquitto-auth-plug#postgresql
On Thu, 2018-07-05 at 09:26 +0800, Mark Webb-Johnson wrote:
All the above has been implemented. To reach parity with v2, and exceed its functionality in places, we have a few more things to do:
Notifications
On the vehicle side, I am proposing to use the <prefix>/notify/<type> namespace for these, using QOS 2 messages without retention. Clients can listen to those, if necessary. We can also have a central daemon running that listens to the ovms/+/+/n/# topic pattern to receive these notifications and handle them appropriately. Using QOS 2 we can confirm the reception of the notification / historical data, and mark it as delivered, appropriately. However, that would only confirm delivery to the MQTT server, not to the central daemon; if the daemon was not running, the notification would be lost.
Textual Commands
I am proposing to use the <prefix>/cmd/<clientid>/c/ and <prefix>/cmd/<clientid>/r/ namespaces for this, using QOS 2 messages without retention. The value in /c/ would be the command, and the response would be written to the matching /r/ topic. The commands and responses could be prefixed by an identifier (as in the IMAP protocol) so responses can be matched to commands by the clients. The client side can simply subscribe to itself, and the vehicle side subscribes to <prefix>/cmd/#. In this way, commands cannot be duplicated, and clients don’t see responses to commands they didn’t initiate (which was an issue with v2).
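To illustrate the IMAP-style tagging, a sketch (plain Python; helper names and the tag format are hypothetical, only the tag-prefix idea is from the proposal above) of how a client could match /r/ responses back to /c/ commands:

```python
import itertools

_seq = itertools.count(1)
_pending = {}  # tag -> original command, awaiting a response

def send_command(command):
    """Build the payload to publish to <prefix>/cmd/<clientid>/c/, tag-prefixed."""
    tag = "a%03d" % next(_seq)
    _pending[tag] = command
    return "%s %s" % (tag, command)   # e.g. "a001 stat"

def on_response(payload):
    """Handle a payload from <prefix>/cmd/<clientid>/r/; match it by tag."""
    tag, _, body = payload.partition(" ")
    command = _pending.pop(tag, None)  # None if we never sent this tag
    return command, body
```

Because the vehicle echoes the tag back in the response payload, a client can have several commands in flight and still pair each response with the right one.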
Numeric Commands
I am not sure if we need to implement the numeric commands, as used in the v2 protocol. It seems to me that we can use textual commands.
Config Access
I am not sure if we need this, beyond the command processor. If we do, we could expose a /config/ namespace.
Events
It would be nice to expose events (except for the ticker.* ones, of course). This could be done by a <prefix>/events topic, using QOS 2 and no retention.
Logs
It would be nice to expose logs. This could be done by a <prefix>/logs topic, using QOS 1 and no retention.
Security
We need to add SSL support. I am trying to get an authentication plugin for mosquitto / vernemq written so that we can authenticate straight from the OVMS database that is already running on the servers, and give each user their own ovms/<userid>/# namespace. That way, the configuration for v3 on the vehicle/apps/clients is simple - just put in the server username and password (no separate vehicle passwords necessary).
I think that is it. The above would form the basis of the specification for this. As this is the basis for future work and direction in OVMS, it is important that we get it right, so all comments / suggestions most welcome.
Regards, Mark.
_______________________________________________ OvmsDev mailing list OvmsDev@lists.openvehicles.com <mailto:OvmsDev@lists.openvehicles.com> http://lists.openvehicles.com/mailman/listinfo/ovmsdev
I suppose the following topic names:
metrics: <prefix>/metric/#
events: <prefix>/event/#
notifications: <prefix>/notify/#
config: <prefix>/config/#
logs: <prefix>/log/<tag>
active: <prefix>/client/<clientid>/active
requests: <prefix>/client/<clientid>/request/#
commands: <prefix>/client/<clientid>/command/<command id>
cmd responses: <prefix>/client/<clientid>/response/<command id>
All ok. I am fine with this, and it looks clean. I will make the changes today, as I want to get this into some cars asap so we can get a feel for how it behaves.
Ok. Now implemented, and tested:
<prefix>/metric/#
<prefix>/client/<clientid>/active
<prefix>/client/<clientid>/command/<command id>
<prefix>/client/<clientid>/response/<command id>
I don’t think config, logs, and requests are critical or urgent. So, I will try to finish events tonight (as that is relatively simple). Notifications over the weekend (more tricky, especially for historical data). It can go in my car with tonight’s nightly ota, for real world testing.
I’m still trying to get the authentication working for drupal vs mosquitto. Once that is done, I can open up api.openvehicles.com MQTT for public use. If I can’t get it done within the next couple of days, I’ll try another broker (lua scripting, anyone?).
P.S. Commands over MQTT are pretty cool:
Regards, Mark.
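For reference, the metric-name-to-topic mapping stays as originally described, just under the renamed metric/ sub-tree: the name is appended to the prefix with "." converted to "/". A one-line sketch (Python; function name hypothetical):

```python
def metric_topic(prefix, metric_name):
    """Map an OVMS metric name to its MQTT topic under <prefix>metric/."""
    return prefix + "metric/" + metric_name.replace(".", "/")
```

So with the default prefix, v.bat.soc would be published as ovms/<username>/<vehicleid>/metric/v/bat/soc, and clients subscribe to <prefix>metric/# for all of them.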
I’ve just pushed the support for notifications in OVMS Server v3. With that, the server v3 is functionally complete (at least on the car side). We can:
Connect/disconnect
Handle lists of apps connecting/disconnecting
Send metrics
Send events
Send notifications (including info, error, alert, and historical data)
Receive commands, run them, and return the results
I’m still struggling against drupal authentication at the server side; once that is done we can open this up to wider testing. Overall, I’m pretty happy with it. We need SSL/TLS support, but at least functionally now it works and is complete.
Regards, Mark
Drupal authentication is done, and working against mosquitto now on the live api.openvehicles.com site. Plain text MQTT only at the moment (I still need to get mosquitto and OVMS module firmware supporting SSL).
To configure OVMS v3 to use Server V3 protocol:
config set:
vehicle id <vehicleid>
server.v3 server api.openvehicles.com
server.v3 port 1883
server.v3 user <openvehicles.com username>
password server.v3 <openvehicles.com password>
server v3 start
Can also 'config set auto server.v3 yes’ to auto-start at boot.
If you have an MQTT client, you can connect to the same server to query the metrics, issue commands, etc. The default topic hierarchy is ovms/<username>/<vehicleid>.
I think we are close now. Just need SSL support. And then Apps…
Feedback appreciated.
Regards, Mark.
I’ve also committed (in client/ovms_v3shell.pl) a small demonstration command line client written in perl:
# ./ovms_v3shell.pl --username MYUSER --id MYCAR --server api.openvehicles.com --password TOPSECRET
Connecting to api.openvehicles.com:1883/MYUSER…
OVMS# metric list m.
m.freeram 4219244
m.hardware OVMS WIFI BLE BT cores=2 rev=ESP32/1
m.monotonic 50197Sec
m.net.mdm.iccid 8944500BEEFDEADBEEF
m.net.mdm.model 35316B11SIM5360E
m.net.provider CSL Hologram
m.net.sq -95dBm
m.net.type modem
m.serial 30:ae:de:ad:be:ef
m.tasks 16
m.time.utc
m.version 3.1.008-51-g960e1c3/ota_0/edge (build idf v3.1-dev-1583-g0fb2019 Jul 11 2018 00:00:34)
v.m.rpm
v.m.temp
OVMS# module memory
Free 8-bit 140088/282736, 32-bit 7304/23560, SPIRAM 4068736/4194252
--Task--      Total DRAM D/IRAM IRAM SPIRAM   +/- DRAM D/IRAM IRAM SPIRAM
main*          8704    0      0 8696          +8704    +0   +0  +8696
OVMS NetMan*      0    0      0   64             +0    +0   +0    +64
no task*       5376    0      0    0          +5376    +0   +0     +0
esp_timer     23884    0    644 44600        +23884    +0 +644 +44600
OVMS Events   67676    0      0 24000        +67676    +0   +0 +24000
ipc0           7776    0      0    0          +7776    +0   +0     +0
ipc1             12    0      0    0            +12    +0   +0     +0
Tmr Svc          88    0      0    0            +88    +0   +0     +0
tiT             700    0      0 5948           +700    +0   +0  +5948
OVMS SIMCOM       0    0      0 3888             +0    +0   +0  +3888
wifi           9496    0      0 2828          +9496    +0   +0  +2828
OVMS Vehicle      0    0      0   32             +0    +0   +0    +32
OVMS Console      0    0      0   20             +0    +0   +0    +20
mdns            104    0      0    4           +104    +0   +0     +4
OVMS NetMan   12584    0  15488 3472         +12584    +0 +15488 +3472
That sends commands to the car over MQTT (so works over cellular as well as wifi networks).
Regards, Mark.
P.S. It is pretty rudimentary at the moment, and doesn’t catch errors that well, but is a good starting point. It does follow the OVMS v3 protocol conventions, and handles last will and testament, etc, correctly.
Hey,
I've been playing around a bit with implementing SSL/TLS support. For starters I wanted to implement an OvmsNetTlsConnection class which could then be used for https requests. At first I tried using wolfSSL, but it turns out wolfSSL is compiled with WOLFCRYPT_ONLY. OpenSSL also didn't work, as SSL_CTX_load_verify_locations is missing in ESP-IDF's OpenSSL. I didn't try mbedtls yet.
Is there a reason why OVMS bundles three different SSL libraries? Also, why is there a custom HTTP implementation when mongoose already has one? Speaking of mongoose, it seems to have an abstraction layer which allows using one of the three SSL/TLS libraries. IMO the best way would be to rewrite ovms_net and ovms_http to use mongoose and configure mongoose to compile with SSL support (probably mbedtls, as the other two don't work).
- Jakob
On Mon, 2018-07-16 at 12:47 +0800, Mark Webb-Johnson wrote:
Jakob,
The only reason for configuring WOLFCRYPT_ONLY was to save space. If the WolfSSL functionality would be useful, then we can change that. The makefile also explicitly controls the subset of source files to be compiled, so that would need to change.
There is a new release of WolfSSH that incorporated the extensions I made, but with some changes, so I need to look at converting over to that new release.
-- Steve
On Mon, 16 Jul 2018, Jakob Löw wrote:
Jakob,
Is there a reason why OVMS bundles three different SSL libraries?
I think wolfSSL is there (but WOLFCRYPT_ONLY) because wolfSSH needs it. The ESP-IDF offers mbedtls and openssl libraries, but we don’t currently use either.
Also why is there a custom HTTP implementation when mongoose already has one?
Primarily because we wrote those before we included mongoose into the project. But also because the mongoose http_client code sucks. It reads the entire message body into RAM before passing it on to the client; that just won’t work for something like a firmware image. To complicate the choice further, ESP now includes an esp_http_client in their latest 3.x IDF (which wasn’t there before). I did try to convert our TCP and HTTP client libraries to mongoose a while ago, but failed. Those libraries are blocking (with i/o done in the context of the thread calling them), but the mongoose library is event-based non-blocking (with i/o output in the context of the calling thread, and i/o input in the context of the mongoose thread). Mongoose didn’t use to be thread safe, and that caused us all sorts of issues (now solved, by the work Michael and others did to make it thread safe).
IMO the best way would be to rewrite ovms_net and ovms_http to use mongoose and configure mongoose to compile with SSL support (probably mbedtls as the other two don't work).
I agree. In general, I prefer the mongoose approach, and we are trying to standardise on that (for good or bad). Probably the best would be to fix the http client in mongoose to work properly (or at least have an option to deliver the body block by block as it arrives), then convert our stuff that uses the ovms tcp/http libraries, then drop the tcp and http libraries. Not trivial, given the different models (blocking vs non-blocking events). Between openssl vs mbedtls, I don’t really care. I think we’re going to have to manage certificate loading ourselves anyway, as we need to load from a combination of statically defined CAs (in flash, using COMPONENT_EMBED_TXTFILES and the asm "_binary_*_start/_end" symbols) and dynamic ones (in /store/tls/trustedca/* or somewhere like that). Regards, Mark.
On 17 Jul 2018, at 1:02 AM, Jakob Löw <ovms@m4gnus.de> wrote:
Hey,
I've been playing around a bit with implementing SSL/TLS support. For starters I wanted to implement an OvmsNetTlsConnection class which could then be used for https requests. At first I tried using wolfSSL but it turns out wolfSSL is compiled with WOLFCRYPT_ONLY. OpenSSL also didn't work as SSL_CTX_load_verify_locations is missing in ESP-IDF's OpenSSL. I didn't try mbedtls yet. Is there a reason why OVMS bundles three different SSL libraries? Also why is there a custom HTTP implementation when mongoose already has one? Speaking of mongoose, it seems to have an abstraction layer which allows to use one of the three SSL/TLS libraries. IMO the best way would be to rewrite ovms_net and ovms_http to use mongoose and configure mongoose to compile with SSL support (probably mbedtls as the other two don't work).
- Jakob
On Mon, 2018-07-16 at 12:47 +0800, Mark Webb-Johnson wrote:
Drupal authentication is done, and working against mosquitto now on the live api.openvehicles.com site. Plain text MQTT only at the moment (I still need to get mosquitto and OVMS module firmware supporting SSL).
To configure OVMS v3 to use Server V3 protocol:
config set vehicle id <vehicleid>
config set server.v3 server api.openvehicles.com
config set server.v3 port 1883
config set server.v3 user <openvehicles.com username>
config set server.v3 password <openvehicles.com password>
server v3 start
You can also 'config set auto server.v3 yes' to auto-start at boot.
If you have an MQTT client, you can connect to the same server to query the metrics, issue commands, etc. The default topic hierarchy is ovms/<username>/<vehicleid>.
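The topic layout described in this thread can be sketched as follows. The prefix is ovms/<username>/<vehicleid>, metrics live under a metric/ sub-tree, and "." characters in metric names are converted to "/" to match MQTT conventions. The function names here are illustrative, not the OVMS source:

```python
# Sketch of the topic scheme described in the thread. Illustrative only.

def topic_prefix(username, vehicleid):
    """Default prefix: ovms/<username>/<vehicleid> (configurable)."""
    return f"ovms/{username}/{vehicleid}"

def metric_topic(prefix, metric_name):
    """Metric names map '.' to '/', e.g. 'v.b.soc' -> <prefix>/metric/v/b/soc"""
    return f"{prefix}/metric/{metric_name.replace('.', '/')}"

p = topic_prefix("demo", "DEMO1")
t = metric_topic(p, "v.b.soc")
```

A client can then subscribe to <prefix>/metric/# to receive every metric, as Robin does below with mosquitto_sub.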
I think we are close now. Just need SSL support. And then Apps…
Feedback appreciated.
Regards, Mark.
On 10 Jul 2018, at 2:42 PM, Mark Webb-Johnson <mark@webb-johnson.net> wrote:
I’ve just pushed the support for notifications in OVMS Server v3.
With that, the server v3 is functionally complete (at least on the car side). We can:
- Connect/disconnect
- Handle lists of apps connecting/disconnecting
- Send metrics
- Send events
- Send notifications (including info, error, alert, and historical data)
- Receive commands, run them, and return the results
I’m still struggling against drupal authentication at the server side; once that is done we can open this up to wider testing.
Overall, I’m pretty happy with it. We need SSL/TLS support, but at least functionally now it works and is complete.
Regards, Mark
On 6 Jul 2018, at 4:16 PM, Mark Webb-Johnson <mark@webb-johnson.net> wrote:
> I suppose the following topic names:
> metrics: <prefix>/metric/#
> events: <prefix>/event/#
> notifications: <prefix>/notify/#
> config: <prefix>/config/#
> logs: <prefix>/log/<tag>
> active: <prefix>/client/<clientid>/active
> requests: <prefix>/client/<clientid>/request/#
> commands: <prefix>/client/<clientid>/command/<command id>
> cmd responses: <prefix>/client/<clientid>/response/<command id>
All ok. I am fine with this, and looks clean. I will make the changes today, as I want to get this into some cars asap so we can get a feel for how it behaves.
Ok. Now implemented, and tested:
<prefix>/metric/#
<prefix>/client/<clientid>/active
<prefix>/client/<clientid>/command/<command id>
<prefix>/client/<clientid>/response/<command id>
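The command/response pairing above works by embedding the same <command id> in both topics, so a client can match each response to the command it issued (much like tagged IMAP commands). A minimal sketch of that matching, with illustrative names that are not the OVMS source:

```python
# Sketch of matching responses to commands via the <command id> path
# element in the implemented topic scheme. Illustrative only.

def command_topic(prefix, clientid, command_id):
    return f"{prefix}/client/{clientid}/command/{command_id}"

def response_topic(prefix, clientid, command_id):
    return f"{prefix}/client/{clientid}/response/{command_id}"

def match_response(topic):
    """Extract (clientid, command_id) from a response topic, else None."""
    parts = topic.split("/")
    # ... client/<clientid>/response/<command id>
    if len(parts) >= 4 and parts[-4] == "client" and parts[-2] == "response":
        return parts[-3], parts[-1]
    return None
```

A client would publish to the command topic, subscribe to <prefix>/client/<clientid>/response/#, and use the trailing id to pair each arriving response with its outstanding command.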
I don’t think config, logs, and requests are critical or urgent. So, I will try to finish events tonight (as that is relatively simple). Notifications over the weekend (more tricky, especially for historical data). It can go in my car with tonight’s nightly ota, for real world testing.
I’m still trying to get the authentication working for drupal vs mosquitto. Once that is done, I can open up api.openvehicles.com MQTT for public use. If I can’t get it done within the next couple of days, I’ll try another broker (lua scripting, anyone?).
P.S. Commands over MQTT are pretty cool:
[two pasted screenshots]
Regards, Mark.

_______________________________________________
OvmsDev mailing list
OvmsDev@lists.openvehicles.com
http://lists.openvehicles.com/mailman/listinfo/ovmsdev
On Mon, Jul 16, 2018 at 12:47:21PM +0800, Mark Webb-Johnson wrote:
If you have an MQTT client, you can connect to the same server to query the metrics, issue commands, etc. The default topic hierarchy is ovms/<username>/<vehicleid>. I think we are close now. Just need SSL support. And then Apps… Feedback appreciated.
Just had a quick play and it looks really good! Using a simple command-line client (mosquitto_sub from the Debian mosquitto-clients package), I got a nice stream of all metrics and events.

Using the free 'MQTT Dash' Android app, it was also trivially easy to set up some pretty state and gauge displays (for SOC, battery, range, door/lock, temperature etc.). Indeed, without writing a line of code, I think I can already use it instead of the V2 app!

I did notice some odd things though. Over a period of several hours, there were three occasions where mosquitto_sub showed the topic name truncated to "ovms/caederus/xxx/metr". The values were different each time, and each was a plausible value from a different topic. I'm now running tcpdump to see if the truncation is from car->server or server->client, but of course it hasn't happened since.
On Thu, Jul 19, 2018 at 11:30:18PM +0100, Robin O'Leary wrote:
I did notice some odd things though. Over a period of several hours, there were three occasions where mosquitto_sub showed the topic name truncated to "ovms/caederus/xxx/metr". The values were different each time, and each was a plausible value from a different topic. I'm now running tcpdump to see if the truncation is from car->server or server->client, but of course it hasn't happened since.
Hmm, I wonder if it could be related to the module crashing and rebooting at some point. The web status page says:

Last boot was 12425 second(s) ago
Time at boot: 2018-07-20 08:48:33 GMT
This is reset #1 since last power cycle
Detected boot reason: Crash (8/14)
Crash counters: 1 total, 0 early
Last crash: IllegalInstruction exception on core 0
Registers:
PC  : 0x00000000  PS  : 0x00000000  A0  : 0x00000000  A1  : 0x00000000
A2  : 0x00000000  A3  : 0x00000000  A4  : 0x00000000  A5  : 0x00000000
A6  : 0x00000000  A7  : 0x00000000  A8  : 0x00000000  A9  : 0x00000000
A10 : 0x00000000  A11 : 0x00000000  A12 : 0x00000000  A13 : 0x00000000
A14 : 0x00000000  A15 : 0x00000000  SAR : 0x00000000  EXCCAUSE: 0x00000000
EXCVADDR: 0x00000000  LBEG: 0x00000000  LEND: 0x00000000  LCOUNT: 0x00000000
Backtrace:
Version: 3.1.008-62-gcdb67ef/ota_1/ro2 (build idf v3.1-dev-1583-g0fb2019f Jul 19 2018 11:04:42)

There doesn't seem to be much useful information there; any suggestions how to track down the cause?
I have seen the same. I think it must be car->server, as it appears in retained messages. Most likely a bug in our modifications to the mongoose library, but I’ve reviewed the code and can’t find it. I will keep looking. Regards, Mark.
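One hypothesis (purely illustrative, not a diagnosis of the actual mongoose bug): in MQTT, the PUBLISH topic is preceded by a 2-byte big-endian length field. If an encoder ever writes a length smaller than the topic it actually emitted, the receiver decodes a truncated topic and the leftover topic bytes bleed into the payload — which would look exactly like a truncated "ovms/caederus/xxx/metr" carrying a plausible value. A sketch of that failure mode:

```python
# Sketch: how a wrong MQTT topic-length prefix produces a truncated
# topic at the subscriber. Hypothetical failure mode, not the real bug.
import struct

def encode_topic(topic, length_override=None):
    data = topic.encode()
    n = len(data) if length_override is None else length_override
    return struct.pack(">H", n) + data       # 2-byte big-endian length prefix

def decode_topic(buf):
    (n,) = struct.unpack(">H", buf[:2])
    return buf[2:2 + n].decode(), buf[2 + n:]  # (topic, remainder-as-payload)

good_topic, _ = decode_topic(encode_topic("ovms/caederus/xxx/metric/v/b/soc"))
# A length of 22 cuts the topic off exactly where Robin observed:
bad_topic, leak = decode_topic(
    encode_topic("ovms/caederus/xxx/metric/v/b/soc", length_override=22))
```

The fact that the retained message on the broker is already truncated supports Mark's conclusion that the corruption happens car->server, i.e. at encode time.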
I've been using the "MQTT Dash" Android app in preference to the old v2 app for the past month, and it's been great. But one thing I have run into is that data usage to the v3 server has shot up compared with v2---so much so that I exceeded the free quota from Hologram (which had been plenty when only using v2).

I know I can increase the config value "server.v3.updatetime.idle", which made me wonder what I lose by setting it really high: if no client is connected, is there much point in sending updates at all? I suppose it makes sense if the server is logging it, but is that the case? I found old documentation for getting historical data from the v2 server, but no mention of v3.

Maybe a more useful config setting would be a way to limit the update rate depending on the "cost" of the network connection. The simple case would just distinguish wifi or modem; there might also be justification to treat particular SSIDs or GSM providers differently (e.g. treat a mobile wifi hotspot as "expensive", or a GSM provider with unlimited data as "cheap").

I then started musing about adding some way to be more selective about which metrics are sent to the server in these cases, which made me realise that the real problem is that the server (or "MQTT broker") is in the wrong place: it should be on the module itself. This would be especially useful for local clients, e.g. a phone running an MQTT app connected to OVMS in "Access point" mode. Remote clients would have the additional problem of discovering the broker's address, but maybe that could be solved some other way, e.g. DDNS, sshtunnel, VPN. One downside would be that multiple remote clients interested in the same metrics would cause them to be sent multiple times, but maybe that is acceptable for the gain in simplicity.

It looks like mongoose can run as a broker (MG_ENABLE_MQTT_BROKER). That's obviously a big change in architecture, which no doubt has implications I haven't thought of, but might that be a viable option?
Maybe a more useful config setting would be a way to limit the update rate depending on the "cost" of the network connection. The simple case would just distinguish wifi or modem; there might also be justification to treat particular SSIDs or GSM providers differently (e.g. treat a mobile wifi hotspot as "expensive", or a GSM provider with unlimited data as "cheap").
Seems reasonable, and easy to implement. So long as there is some mechanism to keep the MQTT connection alive, there is no requirement to send data periodically. We do that purely to (a) keep the connection alive, (b) provide historical data for storage at the server, and (c) provide the App with a view of the last known state of the car (even if the car is disconnected).
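The policy discussed here — once a minute with clients connected, every ten minutes idle, and a possible further back-off on expensive networks — could be sketched as below. The "server.v3.updatetime.idle" config is mentioned in the thread; the network-cost dimension and the hourly metered interval are hypothetical:

```python
# Sketch of a cost-aware update interval policy. The connected/idle
# defaults come from the thread; the metered back-off is hypothetical.

def update_interval(clients_connected, network_cheap,
                    connected_s=60, idle_s=600):
    """Return seconds between modified-metric transmissions.
    - one or more clients connected: once a minute
    - no clients, cheap network (wifi/unlimited): once every ten minutes
    - no clients, expensive network: back off further (sketch value);
      a separate MQTT keepalive still keeps the connection alive."""
    if clients_connected > 0:
        return connected_s
    if network_cheap:
        return idle_s
    return idle_s * 6   # e.g. hourly on metered connections (hypothetical)
```

As Mark notes, the periodic sends are not required by MQTT itself; they exist to keep the connection alive, feed server-side history, and keep retained last-known values fresh, so the idle interval is largely a cost/freshness trade-off.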
I then started musing about adding some way to be more selective about which metrics are sent to the server in these cases, which made me realise that the real problem is that the server (or "MQTT broker") is in the wrong place: it should be on the module itself.
While going without a server seems like a good idea (in the early days of OVMS that is what we tried), it has a fatal flaw: on the vast majority of cellular networks (if not all), you don't get a public IP, and you certainly don't get one that is Internet-routable inbound. There is no way for the App to connect to the cellular module. For example, this is what I get:

OVMS# network
Interface#2: pp2 (ifup=1 linkup=1)
IPv4: 10.52.40.80/255.255.255.255 gateway 10.64.64.64

An RFC1918 private IP address. There are a bunch of 'tricks' around this, involving a broker in the cloud, the UDP protocol, and relying on connection-tracking idiosyncrasies. That would work for a custom protocol, but not for something like MQTT. Regards, Mark.
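The address Mark shows is in RFC1918 private space, so nothing on the public Internet can open a connection inward to the module; that check can be made with the Python standard library, as a quick illustration of the constraint:

```python
# Illustration: the module's cellular address is RFC1918 private, so
# inbound connections from the Internet are impossible without a relay.
import ipaddress

def inbound_reachable(ip):
    """Rough check: a globally routable address is a precondition for
    inbound connections (NAT/firewalling can still block them)."""
    return ipaddress.ip_address(ip).is_global

cellular = inbound_reachable("10.52.40.80")   # the address from Mark's output
```

This is why the broker has to live somewhere with a public address, with the module dialing out to it, rather than on the module itself.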
participants (4)
- Jakob Löw
- Mark Webb-Johnson
- Robin O'Leary
- Stephen Casner