
MySQL: Moving Data From One Table To Another

Our OpenHAB persistence data is stored in MySQL. There’s an “items” table which correlates each ItemName string to an ItemID integer. There are then Item#### tables that store persistence data for each item. If you rename an item, this means a new table is created and previous persistence data is no longer associated with the item. For some items, that’s fine — I don’t really care when the office light was on last month. But there’s persistence data that we use over a long term — outdoor temperature, luminance, electrical usage. In these cases, we want to pull the old data into the new table. There’s a quick one-liner SQL command that accomplishes this:

INSERT INTO NewTable SELECT * FROM OldTable;
e.g. INSERT INTO Item3857 SELECT * FROM Item3854;
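
To find the table numbers to plug into that statement, you can pull the ItemId values from the index table. This is just a sketch: it assumes the persistence database is named openhab and the index table is items (as described above), and the item names are placeholders.

# Look up the Item#### numbers for the old and new item names -- database, table, and item names here are assumptions
mysql -u root -p openhab -e "SELECT ItemId, ItemName FROM items WHERE ItemName IN ('Outdoor_Temperature_Old','Outdoor_Temperature');"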

You can drop the old table too:

DROP TABLE OldTable;

But I run a cleanup script against the item list periodically, so I don't usually bother removing tables one-off.

OpenHAB CalDav Personal Binding – Item Name Filtering

We use the CalDav Personal binding to select items from our Exchange calendar to populate date/time OpenHAB Items — when does Anya have her next gymnastics class, when is the next Trustee meeting, etc. When we had first set this up, we were using manually created appointments. The appointments were assigned unique categories so the binding could determine which appointment should be used to update the Item. I’ve since started creating calendar items based on published calendars (so far a Google calendar and a SchoolPointe calendar), but the Python module for interacting with Exchange cannot assign a category to the appointments it creates.

The binding allows you to filter on the appointment subject ('name') using a regex. The binding documentation says to use filter-name:'\<Some Filter\>' which … well, doesn't work. We tried omitting the backslashes in case they were meant to be escape characters. We tried omitting the greater-than and less-than symbols in case those were meant the way I often use them, to designate <the part you replace>. Still doesn't work. We tried using forward slashes instead of backslashes because that's the normal regular expression delimiter syntax. Nope. We tried adding a 'starts with' ^ and a trailing .* to ensure it would match anything that started with what we wanted. Nope. We'd alternately match all of the appointments or none.

Consulting the source, there is a very restrictive character set available in your name filter regex. The binding uses Java's Matcher with its own regex to extract your regex from the item configuration. You need filter-name: followed by an optional single quote ('? means 0 or 1 of '), then one or more characters from a class containing only the upper- and lower-case letters A-Z, a full stop, an asterisk, a plus sign, a minus sign, a space, and a pipe. Then you may have another single quote. The run of one or more characters from that restricted class is the captured group; this is how the binding gets your regex from the item config.

private static final String REGEX_FILTER_NAME ="filter-name:'?([A-Za-z\\.\\*\\+\\- \\|]+)'?";

Using unsupported characters in your filter-name regex will alternately match all appointments or none. Using filter-name:'\<test>\' (as the documentation literally instructs me to do) doesn't match anything, as + requires one or more characters from the character set and there are zero such characters immediately after the opening single quote. Similarly, filter-name:'^Beginning of string.*' doesn't return a match. It appears that, in cases where the name filter is null, all items are matched, which explains why we were getting the same appointment's details posted into each item.

At the other extreme, a filter like filter-name:'Pick up dry-cleaning at 1 Main Street' will truncate your regular expression at the digit. The extracted group is Pick up dry-cleaning at … which won't match anything unless you actually have an appointment titled "Pick up dry-cleaning at " with a trailing space. I've seen posts on the OpenHAB forum where individuals have non-English words in their filter … filter-name:'Trip to Askøy' which, again, matches nothing since the actual regex used by the binding is Trip to Ask. The same thing happens when looking for character classes (e.g. I don't know if this will be capitalized, so I want to match [Tt]est).
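
If you want to see what the binding will actually extract from a given config string, you can mirror the match outside of Java. This is just an illustration using GNU grep's PCRE mode, which treats this particular pattern the same way Java's Matcher does; \K starts the printed output where the captured group begins.

# Mirror the binding's filter extraction with PCRE (GNU grep -P)
echo "filter-name:'Township: Event Name'" | grep -oP "filter-name:'?\K[A-Za-z\.\*\+\- \|]+"
# prints: Township       (the colon stops the match)
echo "filter-name:'Trip to Askøy'" | grep -oP "filter-name:'?\K[A-Za-z\.\*\+\- \|]+"
# prints: Trip to Ask    (the ø stops the match)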

The solution, since a question mark isn't an option, is to use a plus or splat in place of any character that isn't supported by the binding. Using a plus ensures there's something where you expect the character to occur, although the * is a broader match (we use "Township: Event Name", but I don't need the colon to successfully match my item. "Township Event Name" would match. I could even use a different delimiter, as "Township, Event Name" would also match). Where you are unsure of the case, you need to use a pipe (e.g. filter-name:'Test|test').

The Items that are populated with the start time and event name for the next Township meeting look like this:

DateTime Calendar_Upcoming_Township "Upcoming Township meeting (start) [%1$tA, %1$tB %1$te, %1$tY at %1$tl:%1$tM%1$tp]" <calendar> (gCalendar) {caldavPersonal="calendar:ourcalendar type:UPCOMING eventNr:1 value:START filter-name:'Township. .*'"}
String Calendar_Upcoming_Township_Title "Upcoming Township meeting [%s]" <calendar> (gCalendar) {caldavPersonal="calendar:ourcalendar type:UPCOMING eventNr:1 value:NAME filter-name:'Township. .*'"}

And the calendar events titled “Township: Trustee Regular Meeting” or “Township: Craft Fair” are all identified by the filter.

Note: Scott submitted a PR to change the regex used to extract your filter-name regex. Once this change gets merged, you’ll be able to use character sets (e.g. [T|t]est), numbers, and ‘special’ characters excluding the single quote. Including the single quotes around the filter will be required.

Using ZoneMinder v1.32.3 With OpenHAB2

I documented a temporary fix to return ZM_PATH_ZMS and ZM_OPT_FRAME_SERVER through the ./api/configs/view/<KEYNAME>.json API so ZoneMinder 1.31.45 worked with the OpenHAB2 binding. Upon upgrading ZoneMinder to 1.32.3, the binding was no longer able to communicate with our ZoneMinder server.

In the OpenHAB2 log, errors indicated malformed JSON was received.

Caused by: com.google.gson.stream.MalformedJsonException: Use JsonReader.setLenient(true) to accept malformed JSON at line 1 column 7 path $
at com.google.gson.stream.JsonReader.syntaxError(JsonReader.java:1568) ~[?:?]
at com.google.gson.stream.JsonReader.checkLenient(JsonReader.java:1409) ~[?:?]
at com.google.gson.stream.JsonReader.doPeek(JsonReader.java:542) ~[?:?]
at com.google.gson.stream.JsonReader.peek(JsonReader.java:425) ~[?:?]
at com.google.gson.JsonParser.parse(JsonParser.java:60) ~[?:?]
at com.google.gson.JsonParser.parse(JsonParser.java:45) ~[?:?]
at name.eskildsen.zoneminder.jetty.JettyConnectionInfo.fetchDataAsJson(JettyConnectionInfo.java:352) ~[?:?]

Using a web browser to access <ZoneMinderURL>/zm/api/configs/view/ZM_PATH_ZMS.json, malformed JSON is returned.
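
The same check works from the command line. This is just a sketch with a placeholder hostname, and it assumes the API doesn't require authentication on your install (add credentials if ZM_OPT_USE_AUTH is enabled); python -m json.tool errors out if the body isn't valid JSON.

# Placeholder URL -- substitute your own ZoneMinder server
curl -s "http://zoneminder.example.com/zm/api/configs/view/ZM_PATH_ZMS.json" | python -m json.tool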

Config files are not updated when new packages are installed; a .rpmnew file is created instead. The changes from the new config file (zoneminder.conf.rpmnew) need to be merged into the existing config file (zoneminder.conf). In our zm.conf file, I added:

ZM_DB_SSL_CA_CERT=
ZM_DB_SSL_CLIENT_KEY=
ZM_DB_SSL_CLIENT_CERT=
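
To see everything the packaged config introduced before merging by hand, a plain diff of the two files works. The path below is an assumption (on our Fedora install the config lives under /etc/zm); adjust it to wherever your zoneminder.conf actually sits.

# Compare the live config with the .rpmnew the package dropped -- adjust paths for your install
diff /etc/zm/zm.conf /etc/zm/zm.conf.rpmnew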

Reloading the page in my browser confirmed that the JSON response is valid.

When the ZoneMinder binding started, it successfully attached to our monitors and detected a motion alarm.

This does not negate the need for the original fix: config.php still needs the strcmp check I added. When ZoneMinder is upgraded and /usr/bin/zmupdate.pl is run (I needed to run "/usr/bin/zmupdate.pl -f" to stop zmc from exiting with return code 255), the values I added to the ZoneMinder Config table are removed and need to be re-added.
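
A quick way to check whether those Config rows survived the upgrade, assuming your ZoneMinder database is named zm:

# Hypothetical check -- adjust the database name to match your install
mysql -u root -p zm -e "SELECT Name, Value FROM Config WHERE Name IN ('ZM_PATH_ZMS','ZM_OPT_FRAME_SERVER');"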

 

Quick OpenHAB2 Apt Install In Docker Ubuntu Container

# Set up docker image — exposes OpenHAB web on your port 8080
docker run -p 8080:8080 -dit --name UbuntuOH2 ubuntu:latest

# Shell into the container
docker exec -it UbuntuOH2 /bin/bash

# From within the container, run:
apt update
apt install sudo
apt install vim
apt install wget
apt install gnupg
apt install apt-transport-https

# Repo for Zulu Java
echo 'deb http://repos.azulsystems.com/debian stable main' > /etc/apt/sources.list.d/zulu.list

# Repo for OpenHAB2 stable build
wget -qO - 'https://bintray.com/user/downloadSubjectPublicKey?username=openhab' | apt-key add -
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 0xB1998361219BD9C9
echo 'deb https://dl.bintray.com/openhab/apt-repo2 stable main' | tee /etc/apt/sources.list.d/openhab2.list

apt-get update
apt-get install zulu-8
apt-get install openhab2
apt-get install openhab2-addons

/etc/init.d/openhab2 start

# OpenHAB will be accessible on your IP at 8080. E.g. http://10.10.10.123:8080.
# docker start/stop UbuntuOH2

Temporary Fix: ZoneMinder, PHP7.2, openHAB ZoneMinder Binding

I got ZoneMinder 1.31.45 (which includes the new CakePHP framework that doesn't use what have become reserved words in PHP7) working with the openHAB ZoneMinder binding (which relies on data from the API at /zm/api/configs/view/ATTR_NAME.json). There are two options, ZM_PATH_ZMS and ZM_OPT_FRAME_SERVER, which now return bad parameter errors when attempting to retrieve the config using /view/. Looking through the database update scripts, it appears both of these parameters were removed at ZoneMinder 1.31.1.

ZM_PATH_ZMS was removed from the Config database and placed in a config file, /etc/zm/conf.d/01-system-paths.conf. There is a PR to “munge” this value into the API so /viewByName returns its value … but that doesn’t expose it through /view.

ZM_OPT_FRAME_SERVER appears to have been eliminated as a configuration option.

You cannot simply re-insert the config options into the database, as ZoneMinder itself loads the ZM_PATH_ZMS value from the config file and then proceeds to use it. When it attempts to load config parameters from the Config table and encounters a duplicate … it falls over. We were unable to view our video through the ZoneMinder server.

*But* editing /usr/share/zoneminder/www/includes/config.php (exact path may vary; list the files from your package install and find the config.php in www/includes) to wrap an if clause around the section that loads config parameters from the database, so the parameter is only defined when the Name is not ZM_PATH_ZMS (the strcmp check in the code below), avoids the overlapping config value.

$result = $dbConn->query( 'select * from Config order by Id asc' );
if ( !$result )
   echo mysql_error();
$monitors = array();
while( $row = dbFetchNext( $result ) ) {
   if ( $defineConsts )
      // LJR 2018-08-18 I inserted this config parameter into DB to get OH2-ZM running, and need to ignore it in the ZM web code
      if( strcmp($row['Name'],'ZM_PATH_ZMS') != 0 ) {
         define( $row['Name'], $row['Value'] );
      }
   $config[$row['Name']] = $row;
   if ( !($configCat = &$configCats[$row['Category']]) ) {
      $configCats[$row['Category']] = array();
      $configCat = &$configCats[$row['Category']];
   }
   $configCat[$row['Name']] = $row;
}

Once the ZoneMinder web site happily ignores the presence of ZM_PATH_ZMS from the database config table, you can insert it and ZM_OPT_FRAME_SERVER (an option which appears to have been removed at ZoneMinder 1.31.1) back into the Config table. **Important** — change the actual value of ZM_PATH_ZMS to whatever is appropriate for your installation. In my ZoneMinder installation, /cgi-bin-zm is the cgi-bin directory, and /cgi-bin-zm/nph-zms is the ZMS binary.

From a MySQL command line:

use zm; #Assuming your zoneminder database is actually named zm
INSERT INTO `Config` VALUES (225,'ZM_PATH_ZMS','/cgi-bin-zm/nph-zms','string','/cgi-bin-zm/nph-zms','relative/path/to/somewhere','(?^:^((?:[^/].*)?)/?$)',' $1 ','Web path to zms streaming server',' The ZoneMinder streaming server is required to send streamed images to your browser. It will be installed into the cgi-bin path given at configuration time. This option determines what the web path to the server is rather than the local path on your machine. Ordinarily the streaming server runs in parser-header mode however if you experience problems with streaming you can change this to non-parsed-header (nph) mode by changing \'zms\' to \'nph-zms\'. ','hidden',0,NULL);
INSERT INTO `Config` VALUES (226,'ZM_OPT_FRAME_SERVER','0','boolean','no','yes|no','(?^i:^([yn]))',' ($1 =~ /^y/) ? \"yes\" : \"no\" ','Should analysis farm out the writing of images to disk',' In some circumstances it is possible for a slow disk to take so long writing images to disk that it causes the analysis daemon to fall behind especially during high frame rate events. Setting this option to yes enables a frame server daemon (zmf) which will be sent the images from the analysis daemon and will do the actual writing of images itself freeing up the analysis daemon to get on with other things. Should this transmission fail or other permanent or transient error occur, this function will fall back to the analysis daemon. ','system',0,NULL);

Now restart ZoneMinder and the OH2 ZoneMinder binding. We’ve got monitors on the ZoneMinder web site, we are able to view the video stream, and OH2 picks up alarms from the ZoneMinder server.

If you re-run zmupdate.pl, it will remove these two records from the Config table. If you upgrade ZoneMinder, the change to the PHP file will be reverted.

openHAB With Custom Built Serial Binding – fix to locking permission issue

When we updated our openHAB server to Fedora 28 and changed to a non-root user, the openhab user was unable to create lock files in /run/lock. As an interim fix, we just changed the permission on the lock folder to allow the openhab account to create files. As a more elegant solution, I’ve built the nrjavaserial JAR file from the source in NeuronRobotics’ repository.

The process to build and use a JAR built from this source follows. Before attempting to build the nrjavaserial jar from source, ensure you have gradle (which will install a LOT of additional packages), lockdev, lockdev-devel, some jdk, and some jdk-devel (I used java-1.8.0-openjdk-1.8.0.181-7.b13.fc28.x86_64 and java-1.8.0-openjdk-devel-1.8.0.181-7.b13.fc28.x86_64 because they were already installed for other projects).
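
On Fedora, that prerequisite install looks something like the following; the openjdk package names are just what happened to be on my machine, so substitute whichever JDK you use.

# Build prerequisites -- the specific openjdk packages here are an example, not a requirement
dnf install gradle lockdev lockdev-devel java-1.8.0-openjdk java-1.8.0-openjdk-devel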

# Set ossrhUsername and ossrhPassword values for the account used to build the project – username and password can be null
[lisa@server ~]# cat ~/.gradle/gradle.properties
ossrhUsername=
ossrhPassword=

# Grab the source
[lisa@server ~]# git clone https://github.com/NeuronRobotics/nrjavaserial.git

# Build the project
[lisa@server ~]# cd nrjavaserial
[lisa@server nrjavaserial]# make linux64 # assuming you've got 64-bit linux

# Voila, a jar file
[lisa@server nrjavaserial]# cd build/libs
[lisa@server libs]# ll
total 852
-rw-r--r-- 1 root root 611694 Aug 16 10:08 nrjavaserial-3.14.0.jar
-rw-r--r-- 1 root root 170546 Aug 16 10:08 nrjavaserial-3.14.0-javadoc.jar
-rw-r--r-- 1 root root 85833 Aug 16 10:08 nrjavaserial-3.14.0-sources.jar

Before installing the newly built nrjavaserial-3.14.0.jar into openHAB, ensure you have lockdev installed on your Fedora machine and add your openhab user account to the lock group.

# Verify the lockdev folder was created
[lisa@server ~]# ll /run/lock/
total 4
-rw-r--r-- 1 root root 22 Aug 10 15:35 asound.state.lock
drwx—— 2 root root 60 Aug 10 15:30 iscsi
drwxrwxr-x 2 root lock 140 Aug 16 12:19 lockdev
drwx—— 2 root root 40 Aug 10 15:30 lvm
drwxr-xr-x 2 root root 40 Aug 10 15:30 ppp
drwxr-xr-x 2 root root 40 Aug 10 15:30 subsys
# Add the openhab user to the lock group
[lisa@server ~]# usermod -a -G lock openhab

The openhab user account can now write to the /run/lock/lockdev folder. Install the new jar file into openHAB. When you restart openHAB, verify lock files are created as expected.
[lisa@server ~]# ll /run/lock/lockdev/
total 20
-rw-rw-r-- 5 openhab openhab 11 Aug 16 12:19 LCK...31525
-rw-rw-r-- 5 openhab openhab 11 Aug 16 12:19 LCK..ttyUSB-5
-rw-rw-r-- 5 openhab openhab 11 Aug 16 12:19 LCK..ttyUSB-55
-rw-rw-r-- 5 openhab openhab 11 Aug 16 12:19 LK.000.188.000
-rw-rw-r-- 5 openhab openhab 11 Aug 16 12:19 LK.000.188.001

 

Zoneminder Snapshot With openHAB Binding

When we upgraded our server to Fedora 28, ZoneMinder ceased working because the CakePHP framework it bundles used names that have become reserved words in PHP 7. To resolve the issue, I ended up running a snapshot build of ZoneMinder that includes a newer CakePHP: version 1.31.45 instead of the 1.30.4-7 available from the repository.

All of our cameras showed up, and although the ZoneMinder folks seem to have a bug in the SQL query that builds the table of event counts on the main page (that is, all of my monitors show blanks instead of event counts and my Apache log is filled with

[Wed Aug 15 12:08:37.152933 2018] [php7:notice] [pid 32496] [client 10.5.5.234:14705] ERR [SQL-ERR 'SQLSTATE[42000]: Syntax error or access violation: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'and E.MonitorId = '13' ),1,NULL)) as EventCount1, count(if(1 and (  and E.Monito' at line 1', statement was 'select count(if(1 and ( E.MonitorId = '13' ),1,NULL)) as EventCount0, count(if(1 and (  and E.MonitorId = '13' ),1,NULL)) as EventCount1, count(if(1 and (  and E.MonitorId = '13' ),1,NULL)) as EventCount2, count(if(1 and (  and E.MonitorId = '13' ),1,NULL)) as EventCount3, count(if(1 and (  and E.MonitorId = '13' ),1,NULL)) as EventCount4, count(if(1 and (  and E.MonitorId = '13' ),1,NULL)) as EventCount5 from Events as E where MonitorId = ?' params:13]

… it works.

Until Scott checked openHAB, where all of the items were offline. Apparently the openHAB ZoneMinder binding queries the configs API for ZM_PATH_ZMS (the web path to the cgi-bin zms streaming server), a config option which was removed from the database as part of the upgrade process.

Upgrading database to version 1.31.1
Loading config from DBNo option 'ZM_DIR_EVENTS' found, removing.
No option 'ZM_DIR_IMAGES' found, removing.
No option 'ZM_DIR_SOUNDS' found, removing.
No option 'ZM_FRAME_SOCKET_SIZE' found, removing.
No option 'ZM_OPT_FRAME_SERVER' found, removing.
No option 'ZM_PATH_ARP' found, removing.
No option 'ZM_PATH_LOGS' found, removing.
No option 'ZM_PATH_MAP' found, removing.
No option 'ZM_PATH_SOCKS' found, removing.
No option 'ZM_PATH_SWAP' found, removing.
No option 'ZM_PATH_ZMS' found, removing.
 207 entries
Saving config to DB 207 entries
Upgrading DB to 1.30.4 from 1.30.3

The calls from openHAB yield 404 errors in the access_log

10.0.0.5 - - [15/Aug/2018:09:38:04 -0400] "GET /zm/api/configs/view/ZM_PATH_ZMS.json HTTP/1.1" 404 1751 "-" "Jetty/9.3.21.v20170918"

 

Unfortunately they’ve changed the URL to get these values — it’s “munged” from the config file as the parameters are no longer stored to the Config table.
http://zoneminder.domain.ccTLD/zm/api/configs/view/ZM_PATH_ZMS.json
is now
http://zoneminder.domain.ccTLD/zm/api/configs/viewByName/ZM_PATH_ZMS.json

So … that’s a problem!

Running OpenHAB2 As Non-Root User — With USB

I’ll prefix this saga with the fact my sad story is implementation specific (i.e. relevant to those using Fedora, RHEL, or CentOS). I know Ubuntu has its own history with handling locks, and I’m sure other distros do as well. But I don’t know the history there, nor do I know how they currently manage locking.

We switched our openHAB installation to use a systemd unit file to run as a service and changed the execution to a non-root user. Since we knew the openhab service account needed to be a member of dialout and tty, and we’d set the account up properly, we expected everything would work beautifully.

Aaaand … neither ZWave nor ZigBee came online. Not because it couldn't access the USB devices, but because the non-root user could not lock the USB devices. From journalctl, we see LOTS of error messages that are not reflected in openHAB:

-- Logs begin at Sun 2017-04-30 14:28:12 EDT, end at Sun 2018-08-12 19:10:32 EDT. --
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: check_group_uucp(): error testing lock file creation Error details:Permission deniedcheck_lock_status: No permission to create lock fi>
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: RXTX fhs_lock() Error: opening lock file: /var/lock/LCK..ttyUSB-55: Permission denied. FAILED TO OPEN: No such file or directory
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: [34B blob data]
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: check_group_uucp(): error testing lock file creation Error details:Permission deniedcheck_lock_status: No permission to create lock fi>
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: RXTX fhs_lock() Error: opening lock file: /var/lock/LCK..ttyUSB-5: Permission denied. FAILED TO OPEN: No such file or directory
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: [34B blob data]
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: check_group_uucp(): error testing lock file creation Error details:Permission deniedcheck_lock_status: No permission to create lock fi>
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: RXTX fhs_lock() Error: opening lock file: /var/lock/LCK..ttyUSB1: Permission denied. FAILED TO OPEN: No such file or directory
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: [34B blob data]
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: check_group_uucp(): error testing lock file creation Error details:Permission deniedcheck_lock_status: No permission to create lock fi>
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: RXTX fhs_lock() Error: opening lock file: /var/lock/LCK..ttyUSB0: Permission denied. FAILED TO OPEN: No such file or directory
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: [34B blob data]
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: check_group_uucp(): error testing lock file creation Error details:Permission deniedcheck_lock_status: No permission to create lock fi>
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: RXTX fhs_lock() Error: opening lock file: /var/lock/LCK..ttyS31: Permission denied. FAILED TO OPEN: No such file or directory
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: [34B blob data]
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: check_group_uucp(): error testing lock file creation Error details:Permission deniedcheck_lock_status: No permission to create lock fi>
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: RXTX fhs_lock() Error: opening lock file: /var/lock/LCK..ttyS30: Permission denied. FAILED TO OPEN: No such file or directory
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: testRead() Lock file failed
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: check_group_uucp(): error testing lock file creation Error details:Permission deniedcheck_lock_status: No permission to create lock fi>
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: RXTX fhs_lock() Error: opening lock file: /var/lock/LCK..ttyS29: Permission denied. FAILED TO OPEN: No such file or directory
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: testRead() Lock file failed
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: check_group_uucp(): error testing lock file creation Error details:Permission deniedcheck_lock_status: No permission to create lock fi>
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: RXTX fhs_lock() Error: opening lock file: /var/lock/LCK..ttyS28: Permission denied. FAILED TO OPEN: No such file or directory
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: testRead() Lock file failed
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: check_group_uucp(): error testing lock file creation Error details:Permission deniedcheck_lock_status: No permission to create lock fi>
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: RXTX fhs_lock() Error: opening lock file: /var/lock/LCK..ttyS27: Permission denied. FAILED TO OPEN: No such file or directory
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: testRead() Lock file failed
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: check_group_uucp(): error testing lock file creation Error details:Permission deniedcheck_lock_status: No permission to create lock fi>
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: RXTX fhs_lock() Error: opening lock file: /var/lock/LCK..ttyS26: Permission denied. FAILED TO OPEN: No such file or directory
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: testRead() Lock file failed
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: check_group_uucp(): error testing lock file creation Error details:Permission deniedcheck_lock_status: No permission to create lock fi>
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: RXTX fhs_lock() Error: opening lock file: /var/lock/LCK..ttyS25: Permission denied. FAILED TO OPEN: No such file or directory
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: testRead() Lock file failed
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: check_group_uucp(): error testing lock file creation Error details:Permission deniedcheck_lock_status: No permission to create lock fi>
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: RXTX fhs_lock() Error: opening lock file: /var/lock/LCK..ttyS24: Permission denied. FAILED TO OPEN: No such file or directory
Aug 12 18:36:19 server.domain.ccTLD start.sh[7448]: testRead() Lock file failed

And now my old-school Linux/Unix knowledge totally screws me over — I expected a uucp group with write access to /run/lock. Except … there’s no such group. Evidently in RHEL 7.2, they started using a group named lock with permission to /var/lock to differentiate between serial devices (owned by uucp) and lock files. Nice bit of history, that, but Fedora and RedHat don’t do that anymore either.

Having a group with write permission was deemed a latent privilege escalation vulnerability, so they played around with having a lockdev binary write files to /run/lock/lockdev; the creation and configuration of lockdev was then moved into systemd, and later removed from systemd in favor of other approaches [flock(), for instance].

RXTX has a hard-coded lock path based on the OS; that path is what is used to create the lock file. And since the /run/lock folder is writable only by its owner, root, that is what is failing.

#if defined(__linux__)
/*
	This is a small hack to get mark and space parity working on older systems
	https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=147533
*/
#	if !defined(CMSPAR)
#		define CMSPAR 010000000000
#	endif /* CMSPAR */
#	
#	define DEVICEDIR "/dev/"
#	define LOCKDIR "/var/lock"
#	define LOCKFILEPREFIX "LCK.."
#	define FHS
#endif /* __linux__ */

Which is odd because I see a few threads about how nrjavaserial has been updated and as soon as the newer nrjavaserial gets bundled into the application, locking will all be sorted. And there’s an open issue for exactly the problem we are having … which explains why I’m not seeing something different in their source code. Digging around more, it looks like they didn’t actually change the hardcoded paths but rather added support for liblockdev. Which prompted my hypothesis that simply installing the lockdev package would magically sort the issue. It did not.

In the interim, though, we can just add write permission for /run/lock through the config file /usr/lib/tmpfiles.d/legacy.conf; the distro creates the lock directory owned by root:root. Original config lines:

d /run/lock 0755 root root -
L /var/lock - - - - ../run/lock

We can create the folder as owned by the lock group and add group write permissions (realizing that creates the potential for privilege escalation attacks). Updated config lines:

#d /run/lock 0755 root root -
d /run/lock 0775 root lock -
L /var/lock - - - - ../run/lock

Adding the openhab account to the lock group allows the LCK.. files to be created.

[lisa@server run]# usermod -a -G lock openhab
[lisa@server run]# id openhab
uid=964(openhab) gid=963(openhab) groups=963(openhab),5(tty),18(dialout),54(lock)

Either reboot to reprocess legacy.conf or manually change the ownership & permissions on /run/lock. Either way, confirm that the changes are successful.

[lisa@server run]# chown root:lock /run/lock
[lisa@server run]# chmod g+w lock
[lisa@server lock]# ll /run | grep lock
drwxrwxr-x  7 root           lock             200 Aug 13 14:03 lock
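
If you'd rather not run the chown/chmod by hand, asking systemd to reprocess just the legacy.conf tmpfiles entries should accomplish the same thing without a reboot. I haven't relied on this myself, so verify the ownership afterwards.

# Re-apply the legacy.conf tmpfiles entries instead of rebooting, then confirm
systemd-tmpfiles --create legacy.conf
ll /run | grep lock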

If you manually set the permissions, restart openHAB. Our devices are online, and we have lock files:

[lisa@server lock]# ll
total 12
-rw-r--r-- 1 root root 22 Aug 10 15:35 asound.state.lock
drwx------ 2 root root 60 Aug 10 15:30 iscsi
-rw-r--r-- 1 openhab openhab 11 Aug 13 14:03 LCK..ttyUSB-5
-rw-r--r-- 1 openhab openhab 11 Aug 13 14:03 LCK..ttyUSB-55
drwxrwxr-x 2 root lock 40 Aug 10 15:30 lockdev
drwx------ 2 root root 40 Aug 10 15:30 lvm
drwxr-xr-x 2 root root 40 Aug 10 15:30 ppp
drwxr-xr-x 2 root root 40 Aug 10 15:30 subsys

 

Creating An OpenHAB 2.3.0 Snapshot Docker Container

We found quick instructions for creating a Docker container for the OpenHAB 2.3.0 snapshot. These instructions evidently presuppose some basic knowledge of building Docker containers, so I thought I'd write the "I don't know what I am doing" version of the instructions. Beyond the obvious: download and install Docker, then make sure it's functional (the service starts).

The linked Dockerfile is not the only thing you need. Go up a level — you need both the Dockerfile and entrypoint.sh files. Create a directory somewhere and grab these two files. Then build the container using

docker build -t oh2imagename .

I used a short, alpha-numeric only name for my image. When I used slashes as in the example, the container would not start. Then make the folders you want to map into OpenHAB2:

mkdir /some/path/to/openhab/addons
mkdir /some/path/to/openhab/conf
mkdir /some/path/to/openhab/userdata

The instructions conflate local users/groups with in-container users/groups. You do not need to create a local user. You do need to indicate the uidNumber and gidNumber for the openhab user and group. Even if you do create the local user and group, then change the /some/path/to/openhab permissions to provide full access to the user … you may well not be able to access the files. That is SELinux, not a file permission issue. The quick/dirty solution is to start the container with the privileged flag:

--privileged=true

Alternately, consult the Universal Archive of All IT Knowledge and figure out how to allow the docker service to write files where you want them. And how to access USB devices if you are trying to use something like a ZWave dongle. We went with the privileged route 🙂 The --name option is just the container name. The --net=host option uses the host network for container communications instead of the bridge network. This saves mapping ports, although you could easily use the bridge network and map out the handful of OpenHAB-specific ports. The -d runs the container in detached mode. The -e sets some environment variables (used by the user/group creation script that runs upon container startup). The --tty (or -t) attaches a console. Not really used here.

docker run --privileged --name oh2containername --net=host --tty -d -e USER_ID=5555 \
 -e GROUP_ID=5555 oh2imagename

Ideally, your OpenHAB2 instance will be running. Use “docker ps” to list out the running containers. If you don’t see a container with the name supplied above … then something went wrong. You can use “docker history oh2containername” to view a quick history, but “docker logs oh2containername” will probably provide more useful information. We encountered file permission issues (as noted above, due to SELinux) which prevented the initial container setup from running. Once that was sorted, the container showed up in the running container list.

You’re ready to use it — you can access the web console using your computer’s IP address (assuming you set this container up in the host network and not the bridge — if you used the bridge, you can use “docker inspect oh2containername” and look for IPAddress under NetworkSettings) on the default port. You can ssh into the Karaf console with the default user/password on the default port. Or you can shell into the container.

docker exec -it oh2containername /bin/bash

This is a bash shell running on the OH2 container — you’ll find a lot of ‘stuff’ hasn’t been installed, and your normal command aliases won’t be present. But it’s a shell on the server and can be used to start/stop OH2.
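
One more note on the bridge-network case mentioned above: rather than reading through the full docker inspect output for IPAddress, docker's template formatting will print just that field. Not needed if you ran the container with --net=host.

# Print just the container's bridge IP
docker inspect -f '{{ .NetworkSettings.IPAddress }}' oh2containername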

Logging OpenHAB’s Karaf Console To A File

With OpenHAB2, there is a console where information is displayed. You can copy/paste from the console to save information, but if you are reproducing an issue and expect something to be logged, you can also dump the information from the console into a text file. This is done by ssh'ing into the Karaf console and using tee to write output to a file. Since the SSH server is bound to 127.0.0.1, you will need to use localhost or 127.0.0.1; this cannot be done remotely without some sort of firewall port redirection or an OpenHAB configuration change.

     ssh UserName@localhost -p 8101 | tee -a /tmp/test.txt

So what’s the username? Karaf uses karaf as the username and password. OpenHAB uses the users.properties file (./openhab2/userdata/etc) to store users. Our file has the user openhab. You can google the default password or put your own crypt string in there and know the password.

Now everything that comes across the Karaf console (system output and stuff you type) will be in the /tmp/test.txt file.

[root@fedora01 ~]# tail -f /tmp/test.txt

                          __  _____    ____
  ____  ____  ___  ____  / / / /   |  / __ )
 / __ \/ __ \/ _ \/ __ \/ /_/ / /| | / __  |
/ /_/ / /_/ /  __/ / / / __  / ___ |/ /_/ /
\____/ .___/\___/_/ /_/_/ /_/_/  |_/_____/
    /_/                        2.2.0-SNAPSHOT
                               Build #1114

Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown openHAB.

openhab> bundle:list
START LEVEL 100 , List Threshold: 50
 ID │ State    │ Lvl │ Version                │ Name
────┼──────────┼─────┼────────────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
──────────────────────────────────────────────────────────────────────────────────
 15 │ Active   │  80 │ 2.2.0.201712061711     │ ZWave Binding
 16 │ Active   │  80 │ 2.2.0.201712052342     │ ZigBee Binding
 17 │ Active   │  80 │ 5.3.1.201602281253     │ OSGi JAX-RS Connector
 18 │ Active   │  80 │ 2.4.5                  │ Jackson-annotations
 19 │ Active   │  80 │ 2.4.5                  │ Jackson-core
 20 │ Active   │  80 │ 2.4.5                  │ jackson-databind
 21 │ Active   │  80 │ 2.4.5                  │ Jackson-dataformat-XML
 22 │ Active   │  80 │ 2.4.5                  │ Jackson-dataformat-YAML
 23 │ Active   │  80 │ 2.4.5                  │ Jackson-module-JAXB-annotations
 24 │ Active   │  80 │ 2.7.0                  │ Gson
 25 │ Active   │  80 │ 18.0.0                 │ Guava: Google Core Libraries for Java
 26 │ Active   │  80 │ 3.0.0.v201312141243    │ Google Guice (No AOP)
 27 │ Active   │  80 │ 3.12.0.OH              │ nrjavaserial
 28 │ Active   │  80 │ 1.5.8                  │ swagger-annotations
 29 │ Active   │  80 │ 3.19.0.GA              │ Javassist
 31 │ Active   │  80 │ 3.5.2                  │ JmDNS
 34 │ Active   │  80 │ 1.1.0.Final            │ Bean Validation API
 36 │ Active   │  80 │ 2.0.1                  │ javax.ws.rs-api