What Is LoRaWAN?

Confused about why it’s better than Zigbee? Let us help…

What is LoRaWAN and why is it “better” than Zigbee?

Even long-time IoT enthusiasts struggle with the wealth of technologies on offer these days. One of the most confusing phenomena for someone who isn’t an RF engineer is the scale and range of LoRaWAN. If you’ve been in the game for a while, you may have used a ZigBee radio module for wireless data transmission in your own projects. ZigBee-compliant modules became a gold standard for many industrial applications in the 2000s, featuring a range of more than 10 m (it was said to be 100 m, but that was hardly ever achieved), transfer rates of up to hundreds of kbit/s (depending on the model and radio band used) and message encryption by default. Unlike cheap proprietary transceivers such as the RFM22, ZigBee also offered an industry standard, built on the IEEE 802.15.4 specification, for mesh networking. This allowed ZigBee devices to forward messages from one to another, extending the effective range of the network. Despite their rich features, ZigBee devices are limited in range, and their power consumption limits their potential use in IoT applications. This is where LoRaWAN comes into play: it’s a Low-Power Wide Area Network (LPWAN) standard promising a reach of tens of kilometres for line-of-sight connections and aiming to provide battery lives of up to ten years. How can this work?

First, let’s contrast short-range radio standards like ZigBee with LPWAN standards like LoRaWAN. RFM22, ZigBee and LPWAN devices all use radio frequencies in the ultra high frequency (UHF) range. Following the ITU classification, these are devices that use a carrier frequency of 300 MHz to 3 GHz (band 9); that is, the radio waves have a wavelength of between 10 cm and 1 m, a tiny proportion of the electromagnetic spectrum. Here we find television broadcasts, mobile phone communication, 2.4 GHz WiFi, Bluetooth, and various proprietary radio standards. We all know that television broadcasting transmitters have a significant range, but clearly that’s because they can pack some punch behind the signal. The carrier frequency itself therefore cannot explain the range of LPWAN standards; there must be another reason that LoRaWAN does better than the other radio standards.
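
As a quick sanity check on those numbers, the wavelength is simply the speed of light divided by the carrier frequency. A minimal Python sketch (the example frequencies are just common band choices for the modules mentioned above):

    SPEED_OF_LIGHT = 299_792_458  # metres per second

    # Common UHF carrier frequencies for the radios discussed here
    for label, f_hz in [("433 MHz (RFM22)", 433e6),
                        ("868 MHz (LoRaWAN, EU)", 868e6),
                        ("2.4 GHz (ZigBee / WiFi)", 2.4e9)]:
        wavelength_cm = SPEED_OF_LIGHT / f_hz * 100
        print(f"{label}: wavelength ~{wavelength_cm:.0f} cm")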

There is all sorts of hardware trickery that can be applied to radio signals. Rather than allowing the electromagnetic waves to orientate randomly on their way to the receiver, various polarisation strategies can increase range. A circularly polarised wave that ‘drills’ itself forward can often penetrate obstacles more easily, whereas linearly polarised signals stay in one plane on their way to the receiver, concentrating the signal rather than dispersing it in different directions. However, these methods require effort and preparation on both the sender and the receiver side, and don’t really lend themselves to IoT field deployment…

The secret sauce of LPWAN is the modulation of the signal. Modulation describes how information is encoded in a signal. From radio broadcasting you may remember ‘AM’ and ‘FM’, amplitude and frequency modulation: that’s how the carrier signal is changed in order to express certain sounds. AM and FM are analogue modulation techniques, while digital modulation interprets changes such as phase shifts in the signal as binary toggles. LPWAN standards use a third family of methods, spread-spectrum modulation, which can cope with very weak, noisy input signals. As the key function of an LPWAN chipset is the demodulation and interpretation of very faint signals, one could think of a LoRaWAN radio as a pimped ZigBee module. That’s crazy, isn’t it? To understand in a little more detail how one of the LPWAN standards works, in the following we are going to focus on LoRaWAN, as it is really ‘the network of the people’ and because The Things Network (a world-wide movement of idealists who install and run LoRaWAN gateways) supports our idea of open data.

LoRaWAN uses a modulation method called Chirp Spread Spectrum (CSS). Spread-spectrum methods contrast with narrowband radio in that they ‘do not put all of their eggs into the same basket’. Consider a radio station that transmits its frequency-modulated programme with high power at one particular frequency, e.g. 89.9 MHz (the carrier is 89.9 MHz, with modulations of about 50 kHz to encode the music). If you get to receive that signal, that’s good, but if a concurrent station sends its programme over the same frequency, your favourite station may get jammed. With spread spectrum, the message gets sent over a wide frequency range, and even if that signal is only just above the background noise, it is difficult to deliberately or accidentally destroy the message in its entirety. The ‘chirp’ refers to a particular technique that continuously increases or decreases the frequency while a particular payload is being sent.
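
To get a feel for what a ‘chirp’ is, here is a minimal numpy sketch of a linear up-chirp whose frequency sweeps continuously from f0 to f1; the bandwidth and duration are illustrative values, not actual LoRa parameters:

    import numpy as np

    fs = 1_000_000            # sample rate in Hz (illustrative)
    T = 0.01                  # duration of one chirp in seconds
    f0, f1 = 0.0, 125_000.0   # start and end of the frequency sweep in Hz
    t = np.arange(0, T, 1 / fs)
    k = (f1 - f0) / T         # sweep rate in Hz per second
    # phase of a linear chirp: 2*pi*(f0*t + k*t^2/2); its derivative gives the rising frequency
    up_chirp = np.cos(2 * np.pi * (f0 * t + 0.5 * k * t**2))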

The enormous sensitivity, and therefore reach, of LoRaWAN end devices and gateways has a price: throughput. While the effective range of LoRaWAN is significantly higher than ZigBee’s, the transmitted data rate of 0.25 to 12.5 kbit/s (depending on the local frequency standard and the so-called spreading factor) is a minute fraction of what ZigBee offers – but, hey, your connected dishwasher doesn’t have to watch Netflix, and a payload of 11-242 bytes (again, depending on your local frequency standard etc.) is ample for occasional status updates. Here is where the so-called spreading factor comes into play. If your signal-to-noise ratio is great (close proximity, no concurrent signals, etc.), you can send your ‘chirps’ quickly and finish the transmission in a short time. If you need to compensate for a bad signal-to-noise ratio, it’s better to stretch each ‘chirp’ out over a longer time (a higher spreading factor), which makes the signal easier to pick out of the noise. However, that also means a smaller maximum payload and a drop in data rate.
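
As a rough illustration of that trade-off, the commonly quoted LoRa bit-rate approximation is Rb = SF * (BW / 2^SF) * CR. The sketch below evaluates it for the 125 kHz bandwidth used in the EU band and a 4/5 coding rate; treat the exact figures as indicative only:

    BW = 125_000   # bandwidth in Hz (EU868 default)
    CR = 4 / 5     # coding rate 4/5

    for sf in range(7, 13):
        rb = sf * (BW / 2**sf) * CR   # approximate bit rate in bit/s
        print(f"SF{sf}: ~{rb / 1000:.2f} kbit/s")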

Power consumption, reach and throughput are all linked. Pushing a narrowband burst through to a distant receiver takes more transmit power than emitting a spread signal that can be picked up from just above the noise floor, although a heavily spread signal occupies the air for longer. Hence, LoRaWAN implements an adaptive data rate (ADR) that can take into account the signal-to-noise ratio as well as the power status of a device.

Taking the Air at the Turk’s Head

OpenSensors are pioneers in open data and IoT. Here’s an example of how we work….


OpenSensors are pioneers in open data and the Internet of Things, surfacing a wide range of data sets for open analysis. As an open data aggregator we deliver content over a common infrastructure; whether it’s air quality or transport data, you only have to think about one integration point. Future cities need low data transaction costs for friction-free operation, and bridging technical gaps slows progress, so keeping the number of integration points low makes sense for everybody.

Our journey starts here: as we build out our open data content, expect to see more stories, more insight and hopefully some catalysts for positive change.

Before our first story, consider what will make open data and the Internet of Things useful.

We must bridge the gap from data to information, allowing consumers to abstract away the complexity of IoT and ask questions that make sense to them.

Take data from the London Air Quality Network (LAQN): the network is sparse, so it’s improbable that our need maps directly onto a sensor. By coupling some simple Python code with OpenSensors data, we’ll mash some LAQN data together to get some insight into air quality in Wapping.

In this story I’ll show how we can bridge the information gap with some simple code, yielding valuable insight along the way!

Chapter 1: OpenSensors.io Primer

First a quick primer on how data is structured in OpenSensors.io (for more detail check out our forum and glossary of terms)

  • Devices – Each connected ‘thing’ maps one-to-one to a secured device
  • Topics – Data is published by devices to topics; a topic is a URI and is the pointer to a stream of data
  • Organisations (orgs) – An organisation owns many topics and is the root of an org’s topic URIs
  • Payloads – Payloads are the string content of messages sent to topic URIs, typically JSON

Also check out our RESTful and streaming APIs on the website for more background and online examples.
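
To make the model concrete, a hypothetical topic and payload (the org, topic name and fields below are invented for illustration) might look like this:

    Topic:   /orgs/example-org/air-quality/wapping-1
    Payload: {"sensor": "wapping-1", "no2": 42.1, "units": "ug/m3", "timestamp": "2016-02-01T08:00:00Z"}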

Chapter 2: Putting JSON to Work

You can use the OpenSensors REST API to gather data for research, but it comes in chunks of JSON, which isn’t great for data science. For convenience I wrapped up some common data sources for London into a Python class. Since IoT data is rarely in a nice columnar form, it’s valuable to build some simple functions to shape the data into something a bit more useful.
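
As a sketch of what such a wrapper might look like (the endpoint URL, parameters and JSON field names here are assumptions for illustration, not the documented API), a few lines of Python turn a chunk of messages into a pandas DataFrame:

    import requests
    import pandas as pd

    API_KEY = "your-api-key"  # hypothetical credential

    def fetch_topic(topic, start, end):
        """Fetch messages for a topic and return the payloads as a DataFrame.
        NOTE: the URL and response structure are illustrative assumptions."""
        url = "https://api.opensensors.io/v1/messages/topic" + topic
        resp = requests.get(url,
                            params={"start-date": start, "end-date": end},
                            headers={"Authorization": "api-key " + API_KEY})
        resp.raise_for_status()
        rows = [msg["payload"] for msg in resp.json().get("messages", [])]
        return pd.DataFrame(rows)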

Chapter 3: Introducing the Turk’s Head

I’m fortunate to spend a lot of time in Wapping, in and around the community of the Turk’s Head Workspace and Cafe, but unfortunately we don’t have a local LAQN sensor. With a bit of data science and OpenSensors.io open data we can estimate what NO2 levels might be around the cafe and workspace.

A simple way to estimate NO2 is a weighted average of all the LAQN sensors; in this case we derive the weights from the distance between each sensor and our location. Since we want to overweight the closest sensors, we can use an exponential decay so that weights deflate towards zero for those far away.
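
A minimal sketch of that weighting scheme (the decay length and distances below are placeholders):

    import numpy as np

    def decay_weights(distances_km, scale_km=2.0):
        """Exponentially decaying weights: nearby sensors dominate, distant ones fade towards zero."""
        w = np.exp(-np.asarray(distances_km) / scale_km)
        return w / w.sum()   # normalise so the weights sum to 1

    # hypothetical distances (km) from the Turk's Head to four LAQN sensors
    weights = decay_weights([1.1, 1.8, 2.4, 6.0])
    # estimate = (weights * latest_no2_readings).sum()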

For the Turk’s Head, the sensors in Aldgate, Southwark, Tower Hamlets and the City are the closest and have the biggest impact on our estimate.

Chapter 4: Getting into the Data

With our air quality time series and our weights, we can dig into what our estimates for the Turk’s Head look like (NO2 * weight). Here’s the series for NO2 over the last 20 days: it looks like the peaks and troughs repeat, and the falling or rising trend is persistent in between.

Trend followers in finance use moving averages to identify trends, for example the MACD indicator (moving average convergence divergence). MACD uses the delta between a fast and a slow moving average to identify rising or falling trends, and we’ll do the same. For our purposes we’ll speed the averages up, using decays of 3 and 6 periods (LAQN data is hourly and we are resampling to give estimates on the hour).
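
In pandas, the fast and slow decayed averages take only a few lines; the series below is synthetic stand-in data, so swap in the real weighted NO2 estimate:

    import numpy as np
    import pandas as pd

    # stand-in hourly NO2 estimates for the Turk's Head (replace with the weighted series)
    idx = pd.date_range("2016-02-01", periods=48, freq="H")
    no2 = pd.Series(40 + 10 * np.sin(np.arange(48) / 6), index=idx)

    fast = no2.ewm(span=3).mean()   # 'fast' 3-period decayed average
    slow = no2.ewm(span=6).mean()   # 'slow' 6-period decayed average
    good_air = fast < slow          # True while NO2 is trending down ('good')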

What can we conclude from the charts for The Turk’s Head? From the left-hand chart we can see the data is a little noisy, with a flat line showing some missing or ‘stalled’ data. Looking at the 3 and 6 period decayed averages, the data is smoother, with the faster average persistently trending ahead of the slower one.

Even with fast-moving decays, the averages cross only a couple of times a day, showing persistence when in trend. So using a simple trend indicator and the LAQN we can build a simple air barometer for the Turk’s Head.

Good: 3 period exp average < 6 period average (green)
Bad: 3 period exp average > 6 period average (red)

This is helpful because, given a persistent trend state, where we have ‘good’ air now, we’ll probably have ‘good’ air for the following hour.

Chapter 5: What’s the trend across London?

So we now have means of defining how NO2 levels at the Turk’s Head are trending, but is the trend state predictable over a 24 hour period?

Remember we define good or bad air quality trend as:

Good: ‘fast’ average < ‘slow’ average = falling NO2
Bad: ‘fast’ average > ‘slow’ average = rising NO2

If we aggregate the data into hourly buckets, we can visualise how much of the time, over the past 20 days, a sensor has been in an improving (‘good’) trend for a given hour.

x = hour of the day
y = percentage of the bucket that is in a ‘good’ state
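
Continuing from the good_air flag in the Chapter 4 sketch, the hourly bucketing is a one-line groupby (the variable names are illustrative):

    # good_air: boolean Series indexed by timestamp, True when the NO2 trend is 'good'
    pct_good_by_hour = good_air.groupby(good_air.index.hour).mean() * 100
    print(pct_good_by_hour.round(1))   # % of the past 20 days in a 'good' state, per hour of day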

We can see that for each 1-hour bucket (24 in total) there is a city-wide pattern; if we aggregate across the city (using the same measure, the percentage of sensors in an improving or worsening trend) we get an idea of how NO2 trends over a typical day.

Our right-hand chart shows the percentage of ‘good’ versus ‘bad’ NO2 sensor states across London over the past 20 days (collected from about 80 sensors).

Now this is a really simple analysis but it suggests the proportion of ‘good’ trends across London is high before 7am, and then falls away dramatically during the morning commute. No surprises there.

But the pattern isn’t symmetrical: after peaking around lunchtime, when only ~20% of the city’s sensors have improving NO2, NO2 falls away throughout the afternoon. From a behavioural standpoint this makes sense; the morning commute is more concentrated than the evening one. Most of us arrive at the workplace between, say, 8 and 9am, but in the evening we may go to the gym, go out for dinner, or just work late. The dispersion of our exits from the city is wider than that of our entries.

Chapter 6: PM versus NO2

So far we have considered NO2 as our core measure, in part because there are more sensors in the LAQN delivering this data than particulates. But let’s consider particulates for a moment: the LAQN delivers PM10 and PM2.5 measures (the definitions can be found on the LAQN website).

Our temporal curves for particles differ from those for NO2, taking longer to disperse during the evening rush hour (remember we are measuring the percentage of sensors in a ‘good’ state). As a measure of air quality, NO2 builds up faster and decays faster once peak traffic flows have completed, whereas particles linger, only fading deep into the night (on average).

Closing Thoughts

In our data set, NO2 and PM measures differ in their average behaviour over a typical 24 hour period.

  • Behavioural interventions will need to consider whether particulates or NO2 are the most impactful.
  • How can we communicate air quality to our citizens, and relate their personal needs to the measures most impactful on their lives?
  • Do we need additional sensors to create a more dense air quality resource? How can we allocate funds to optimally support network expansion and air quality services?
  • Knowing the characteristics of a sensor (location, calibration, situation: elevated, kerb side, A or B road) will improve estimates; how can we deliver this metadata?

Plenty of food for thought… information.

Notes and Resources

Our stories are quick and dirty demonstrators to promote innovation and should be treated as such. All data science and statistics should be used responsibly 🙂

All of the code supporting this can be found on GitHub, with data sourced from OpenSensors’ LAQN feed, and I use a postcode lookup to get long/lat locations for Wapping. I’ve also taken some inspiration from https://github.com/e-dard/boris and https://github.com/tzano/OpenSensors.io-Py, so thanks to their authors for the contribution!

http://www.londonair.org.uk/ https://www.opensensors.io/

The Path to Smart Buildings

The need for good, informed design within buildings

Google ‘principles of good architectural design’ and you’ll get links to technology, to buildings and to all manner of other services. But it’s hard to find principles of design for the tech services that facilitate smart buildings. Let’s remind ourselves what a smart building is with the help of a sustainable tech forum: ‘The simple answer is that there’s automation involved somehow that makes managing and operating buildings more efficient’. So the need is well documented, but we want to bridge to the ‘practice of designing and constructing buildings’; after all, that’s what architecture is about.

OpenSensors hosted its first Smart Building Exchange (SBeX) event in September, and we are grateful to the panelists and attendees who made it such a success. Our goal was to bridge the gap between the widely documented features of smart buildings and the tech that underpins them. Through our workshops we decomposed tenant needs and identified services to support them using the value proposition canvas. We borrow from lean product design principles since building operators need to innovate rapidly using processes inherited from startups. Mapping the pains and gains of users to the features and products of the tech stack revealed a common theme: data infrastructure. Data is the new commodity that new services will be built upon; some will be open and others private, but data will be the currency of the next generation of smart buildings.

Take integrated facilities management (IFM), where data serves the desire to deliver better UX at a lower cost with fewer outages. IFM has pivoted from a set of siloed software services to a set of application services overlaid upon a horizontal data infrastructure. For example:

  • Data science services will develop to identify ‘rogue’ devices operating outside expected patterns; they will identify assets that need inspection or replacement and schedule maintenance works using time and cost optimisation routines.
  • Digital concierge services will use personal devices, location-based technology and corporate data (calendar and HR data) to optimise both user experience and spatial allocation.

So can we identify a tech architecture to support this pivot from monolithic apps? Data services facilitated by a central messaging backbone allow the complexity of building services to be broken down and tackled one service at a time, lowering the risk of failure and allowing agile iterations at a reduced cost. Take the pillars of data-driven applications for IFM as identified by our workshop group: predictive/reactive alerting and tactical/strategic reporting. How might we go about servicing these needs? Consider how the path to smart buildings outlined below could help build an IFM product.

  • Build the value proposition founded on a clear vision of what your users want.
  • Identify the data that will drive your smart building product including open data
  • Identify the sensors needed to gather your data, they could be mobile devices or occupancy sensors
  • Identify connectivity from the sensor to your data infrastructure; this might be radio to IP-connected gateways or directly onto the local network via PoE (Power over Ethernet)
  • Structure your message payloads and commit to schemas to deliver repeatable processes for message parsing and routing within your building (see the example payload after this list)
  • Configure your events, turning your data into information using rules-based platforms for IoT such as Node-RED
  • Build widgets and data services that can be bound together for dashboarding. By identifying common user needs across the enterprise we can operate a leaner system stack
  • Build user portals and dashboards using your common data services and components
  • Validate tenant user experience through surveys and modelling tenant behaviour using occupancy devices
  • Iterate to improve using data gathered throughout the building to deliver better products and experiences
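
As a concrete illustration of the payload and schema step above, a hypothetical occupancy message (all names and fields invented for illustration) might look like:

    {
      "schema": "occupancy/v1",
      "device": "floor3-desk-021",
      "occupied": true,
      "battery_pct": 87,
      "timestamp": "2016-09-14T09:30:00Z"
    }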

OpenSensors has firmly backed Open Source and Open Data as the best way to yield value from the Internet of Things, choosing to collaborate with the tech community to enable facilities managers to build higher-order systems focused on their domain expertise. Please contact commercialteam@opensensors.io should you have a need for a smart building workshop or be ready to build your next generation smart building product.

Don’t Make Me Think

Unlocking great user experience in buildings boils down to data

Expect the early adopters of ‘enchanted’ buildings to be our employers: the World Green Building Council estimates we spend 10% of our costs on facilities management, while 90% is the expense of executing our business. You don’t have to be an accountant to realise that a 1% improvement in productivity trumps a 1% saving in facilities costs by 9 to 1! So how might smart buildings deliver productivity and improved user experience (UX)?

Great UX should be pain free: “Don’t make me think” (Steve Krug). Whilst smart phones offer a means of logging in to a workplace, it’s a bind to install the app, to log in and to connect, and privacy and indoor location services are a challenge. IoT tech such as OpenSensors, beacons, noise and air quality sensors, coupled with responsible anonymisation, can deliver on productivity, because improved building and personal wellness simply means we get more done. But how might this work?

Aarron Walter said “Designers shooting for usable is like a chef shooting for edible.” As techies we can apply these ideas to civic interactions. Take a large office space: I arrive from out of town, visiting for a meeting with my project team. I register, head off to the flexible space and grab a desk, perhaps wasting time trying to find my guys. Each of the team then arrives; some may co-locate, others disperse, and there’s no convenient breakout space; the collaboration is diluted and we’re disturbing others. We ate, but it wasn’t a great meal.

The lack of an inexpensive, robust, secure and open tech stack rendered us powerless: we have been consuming ‘edible’ tenant experiences rather than a delightful meal. But tech is moving fast; expect new digital services enabled by advances in IoT hardware and data software to shake down the industry. Organisations ready to invest and experiment will move ahead; they’ll develop an ‘edge’ that will define their services and branding for years to come.

Digital concierge – expect to sign in digitally on a device that will bind you, your calendar, your co-workers and your building. Through data, expect intelligent routing to the best workspace for your or your group’s needs.

Location based services – sensors enable ‘just in time’ cleaning services that clear flexible working space when meetings conclude, or sweep loitering coffee cups and deliver fresh coffee during breaks in longer workshops.

Environmental factors – expect IoT to bubble up environmental data such as air quality, temperature, humidity, light and noise that can be used to adjust HVAC systems in real time, or to aid interior designers in improving the workplace.

Smart facilities management – location based services coupled with smart energy grid technology will allow fine tuning of energy supply reacting to changes in demand and national grid status (smart grid frequency response).

Data science – each of the above services a specific need whilst wrangling data sets into an ordered store. Technology like OpenSensors can then add further value through real-time dashboarding for health and safety or real-time productivity management. Furthermore, once data is captured we can apply machine learning to understand more deeply the interactions of our human resources and physical assets, through A/B testing or other data science.

Unlocking great UX in buildings boils down to data: capturing it, wrangling it, applying science and iterating to make things better. First we must gather the data from the systems already in place (see First ‘Things’ First) whilst supplementing it with new devices such as air quality sensors, occupancy sensors or beacons. Having been provided with a robust data fabric, tenants need to become active rather than passive, agile rather than rigid, in their approach to managing their assets. IoT devices and data services will deliver an edge: the best-of-breed user experience that tenants value so highly.

European Parliament Approves eCall Technology

The Internet of Things threatens to revolutionise everyday life

European Parliament approves eCall connected car platform

The Internet of Things threatens to revolutionise everyday life, embedding and imbuing everyday objects and the world around us with sensors, software and electronics. Through machine-to-machine communication, automation and advanced analytics, we are able to understand and scrutinise our environment and the processes which surround us in ways never conceived before. Examples range from automated condition monitoring of critical engine parts, giving engineers the tools to reduce costly operational downtime, to embedding real-time sensors in bridges to predict stresses and flooding. Beyond the Cloud, the Internet of Things brings the internet to the everyday, and there are clear use cases for such technologies in the realm of road safety.

This is where eCall comes in. eCall is a European Commission initiative coming into force on 31 March 2018, making mandatory the deployment of internet-connected sensors in cars so that the emergency services can be contacted and requested automatically and immediately after a serious road incident within the European Union. EC VP for Digital, Neelie Kroes, argues: “EU-wide eCall is a big step forward for road safety. When you need emergency support it’s much better to be connected than to be alone.” eCall will drastically cut European emergency service response times, even in cases where passengers are unable to speak through injury, by sending a Minimum Set of Data (MSD), including the exact location of the crash site.

The deployment of eCall is one of the most ambitious EU-wide programmes since the 2007 enlargement, rolling out the eCall platform to some 230 million cars and 33 million trucks in the European Union. Implementation of eCall at a European level (including Norway, Switzerland, etc.), however, benefits consumers and industry by reducing costs through economies of scale, bringing the installation cost down to as little as €100. The basic pan-European eCall service will be free at the point of use for equipped vehicles. It is likely that the eCall technology platform (i.e., positioning, processing and communication modules) will be exploited commercially too, for stolen vehicle tracking, dynamic insurance schemes, eTolling and emerging forms of vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) road safety systems. eCall will be based upon a standardised platform, one system for the entirety of Europe, aimed at enabling both the car and telecoms industries to roll it out quickly and to avoid crippling OEM versioning and patching issues.

In terms of privacy, the basic eCall system has been given the green light by the European Commission on the express condition that firm data protection safeguards are in place: the sensor-equipped vehicles will not push data to external servers except in the case of a crash, or by the actions of the driver, in order to contact the PSAP (Public Safety Answering Point), and will lie dormant until that point. The data transmitted to the emergency services, described as the MSD (Minimum Set of Data), are those strictly needed by the emergency services to handle the emergency situation. In normal operation the system is not registered to any telecoms network, and no mediating parties have access to the MSD that is transmitted to the PSAPs.

Today the European Parliament’s Internal Market and Consumer Protection Committee MEPs voted on and approved eCall, pushing forward a life-saving Internet of Things technology that will significantly improve European road safety. The UK Government, however, has not followed suit; whilst welcoming the implementation in other member states, it feels that “it is not cost-effective … given the increasing responsiveness of our road network, we feel that smart motorways do the same thing,” remarked Minister Perry on behalf of the Department for Transport. Whilst it can be argued that ‘Smart Motorways’ are far from a worthy substitute for connected cars and V2V/V2I systems, the UK’s criticism betrays a certain caution with regards to green-lighting large and costly IT projects. Only time will tell whether the UK Government’s decision has left those drivers not on Britain’s Smart Motorways in the lurch.

Monitoring for Earthquakes With Node-red

Develop your own earthquake-triggered workflows. Let’s shake it.

OpenSensors now capture seismic data from the Euro-Med Seismic Centre (EMSC) and the United States Geological Survey (USGS). Every ten minutes we poll the latest information on major and minor earthquakes around the globe and make it available via our application programming interface (API) or as an MQTT feed. In this short tutorial, we show you how to use OpenSensors together with Node-RED to receive email alerts whenever there’s a major incident in a region of interest. You can use this guide as a starting point for further experiments with Node-RED and develop your own earthquake-triggered workflows. Let’s shake it.
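
If you’d like to peek at the raw feed before wiring up Node-RED, a minimal Python sketch using the paho-mqtt client would look something like the following; the client id, user name and password are the device credentials you create in the next section, and the broker details match the flow at the end of this chapter:

    import paho.mqtt.client as mqtt   # paho-mqtt 1.x style API

    def on_message(client, userdata, msg):
        print(msg.topic, msg.payload.decode())

    client = mqtt.Client(client_id="your-client-id")          # OpenSensors device client id
    client.username_pw_set("your-username", "your-device-password")
    client.on_message = on_message
    client.connect("opensensors.io", 1883)
    client.subscribe("/orgs/EMSC/+")                          # all EMSC earthquake topics
    client.subscribe("/orgs/USGS/+")                          # all USGS earthquake topics
    client.loop_forever()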

On OpenSensors

  • First, you need to log in to your account on OpenSensors, or sign up for one at https://OpenSensors.io if you haven’t done so already.
  • Next, it’s good practice to have a new ‘device’ for this application, i.e. a dedicated set of credentials you’re going to use to log in to OpenSensors for this particular set of MQTT feeds.
    • In the panel on the left, click My Devices in the Devices menu.
    • Click the yellow Create New Device button at the top of the page.
    • Add an optional description, then press the disk icon to save your new device.
    • Take a note of your ‘Client id’ and ‘Password’ as you’re going to need them in your Node-RED workflow.

For Node-RED

Install node.js and Node-RED on your system. There’s a very good guide for this on the Node-RED website. Follow the instructions, including the separate section on Running Node-RED.

Once you’re ready, open a web browser and direct it to localhost:1880, the default address and port of the Node-RED GUI on your system.

(A very basic description of the Node-RED vocabulary can also be found at SlideShare.)

Developing a workflow

  • From the input panel of your nodes library on the left side, drag and drop a pink mqtt input node into the work area named Sheet 1.
  • Double-click the mqtt node. A window with configuration details opens.
    • Click the pen symbol next to ‘Add new mqtt-broker…’. Your Broker is opensensors.io, your Client ID and Password are those you generated in the previous step on the OpenSensors website, and User is your OpenSensors user name.
  • Once the Broker is defined, enter /orgs/EMSC/+ into the Topic field. This is going to instruct Node-RED to subscribe to all MQTT topics generated by the EMSC.
  • Optional: Set the Name of this node to ‘EMSC’.
  • Drag and drop a second mqtt input node. When you double-click the node, you will realise that the Broker settings default to the ones you previously entered.
    • Enter /orgs/USGS/+ in the Topics field and ‘USGS’ as optional Name.
  • Drag and drop a dark green debug node from the output panel on the left. While debugging has the connotation of fixing a problem, in Node-RED it’s the default way of directly communicating messages to the user.
  • Draw connection lines (“pipes”) from both mqtt nodes to the debug node.
  • Press the red Deploy button in the upper right corner. This starts your Node-RED flow. If everything worked, you should see ‘connected’ underneath the mqtt nodes, and your debug panel (on the right) should soon produce JSON-formatted output whenever there’s an event (which may take a while!).

While it is pleasing to be informed every time the earth shakes, it soon becomes tedious staring at the debug panel in expectation of an earthquake. Also, you may not be interested in events in remote areas of the world, or perhaps exactly in those – whatever interests you.

We are going to extend our flow with some decision making:

First, we need to parse the information from the EMSC and USGS. For this example, we’re going to be particularly interested in the fields region and magnitude. There are plenty more fields in their records, and you may want to adjust this flow to your needs.

  • Drag and drop a pale orange function node from the functions panel into your flow. Connect both mqtt nodes to the input side (the left side) of your function node. Function nodes allow you to interact directly with your data using JavaScript.
  • Enter the following code (or download the OpenSensors workflow).

This isn’t meant to be a JavaScript course :–) In a nutshell, the code takes data from the ‘payload’ of the incoming message (read up on the topic and payload concepts of Node-RED in the SlideShare article suggested earlier). The payload is then parsed for the region and magnitude fields using standard regular expressions. If we can successfully extract the information (in this case: a region containing ‘ia’ somewhere in its name), we set the outgoing message’s payload to the magnitude and its topic to ‘EVENT in ’ plus the name of the region, and pass it on (‘return msg’) to the next node.

  • Drag and drop a lime green switch node from the function panel into your workflow. Connect the output of the function node to the input of the switch node. Configure the switch node (by double-clicking) to check whether the payload (the magnitude of the earthquake) is greater than 2; only then is the message passed on.
  • Last, we’re going to drag and drop a light green e-mail output node from the social panel and configure it like an e-mail client, but with a default recipient: in this case, ohmygodithappened@gmail.com.
  • Connect the output of the switch node to our debug node, as well as to the outgoing e-mail node.
  • We can then deploy the new workflow and should see something like this after a while:

In this case, an event was detected ‘off the coast of Northern California’ with a magnitude of 4.4 and at the same time, you should receive an e-mail with the region as subject and the magnitude in the body of the e-mail.

We hope that this flow gets you started! Remember that Node-RED is superbly suited to interacting with hardware… imagine LEDs and buzzers indicating an earthquake.

The flow JSON: [{“id”:“e9024ae0.16fdb8”,“type”:“mqtt-broker”,“broker”:“opensensors.io”,“port”:“1883”,“clientid”:“1646”},{“id”:“2952b879.d6ad48”,“type”:“mqtt in”,“name”:“EMSC”,“topic”:“/orgs/EMSC/+”,“broker”:“e9024ae0.16fdb8”,“x”:127,“y”:104,“z”:“82a1c632.7d5e38”,“wires”:[[“490a140f.b6f5ec”,“163677af.e9c988”]]},{“id”:“54239d6.fabdc64”,“type”:“mqtt in”,“name”:“USGS”,“topic”:“/orgs/USGS/+”,“broker”:“e9024ae0.16fdb8”,“x”:128,“y”:159,“z”:“82a1c632.7d5e38”,“wires”:[[“490a140f.b6f5ec”,“163677af.e9c988”]]},{“id”:“490a140f.b6f5ec”,“type”:“debug”,“name”:“”,“active”:true,“console”:“false”,“complete”:“false”,“x”:538,“y”:86,“z”:“82a1c632.7d5e38”,“wires”:[]},{“id”:“163677af.e9c988”,“type”:“function”,“name”:“parse”,“func”:“// uppercase the payload (different centres report in mixed formats)nmsg.payload = msg.payload.toUpperCase();nn// extracting interesting fields with regular expressions,n// instead of using JSON.parse which fails with null fieldsnvar places_with_ia_regex = new RegExp(“REGION”:“(.IA.)”,“UPDATED”);nvar result1 = places_with_ia_regex.exec(msg.payload);nnvar magnitude_regex = new RegExp(“MAGNITUDE”:([0-9].[0-9]+)“);nvar result2 = magnitude_regex.exec(msg.payload);nn// if successful, sets topic to the region and payload to the magnitudenif (result1 && result2) {n msg.topic = ‘EVENT in ’+result1[1];n msg.payload = result2[1];n return msg;n}”,“outputs”:1,“noerr”:0,“x”:296,“y”:251,“z”:“82a1c632.7d5e38”,“wires”:[[“64f4f2ea.9b0b0c”]]},{“id”:“64f4f2ea.9b0b0c”,“type”:“switch”,“name”:“at least magnitude 2”,“property”:“payload”,“rules”:[{“t”:“gte”,“v”:“2”}],“checkall”:“true”,“outputs”:1,“x”:428,“y”:179,“z”:“82a1c632.7d5e38”,“wires”:[[“490a140f.b6f5ec”,“f7bcc59c.084338”]]},{“id”:“f7bcc59c.084338”,“type”:“e-mail”,“server”:“smtp.gmail.com”,“port”:“465”,“name”:“ohmygodithappened@gmail.com”,“dname”:“”,“x”:581,“y”:256,“z”:“82a1c632.7d5e38”,“wires”:[]}]

First ‘Things’ First

What’s needed to make smart city data exchange a reality?

I was pleased to see the recent post by the ODI on the open-shared-closed data spectrum since it resonates with the challenges faced at OpenSensors. To date most of our commercial projects have been at the private end of the spectrum; they are challenging, they are innovative, but they are often not ingesting open data or publishing data as an exhaust.

Are we worried about private IoT messaging? Not too much. Most of our private clients choose to get their own house in order first; after all, there’s typically a lot of opportunity to juice existing sensors. First ‘things’ first, as they say.

The good news is that these deployments are sowing the seeds of sharing behaviours by distributing content internally, releasing data that used to terminate and die. They are unlocking data and distributing it for access via API for dashboards, data science and decision support, which is the first step on a journey to openness.

So, as a tech company, how do we lead our clients and help them deliver an open data strategy? We provide the tools to allow organisations to manage data entitlements, pushing themselves up the data spectrum to become open. Each of our clients will make their own journey to open up their content; our job is to deliver infrastructure allowing them to manage data at a level of privacy that works for them.

This is important stuff. IoT tech companies are developing the smart city data network, and we don’t want it to be private. We want pain-free navigation from edge to edge of our urban data grid, whilst feeling secure and confident about the data we consume. Our platforms must secure data whilst facilitating its exchange and entitlement control. So what’s needed to make smart city data exchange a reality? A couple of things spring to mind; we need to…

Evolve Topics and Communities – Expect faster adoption of sharing behaviours within trusted communities. By curating communities with shared interests expect adoption of localised data exchange, say amongst tenants of a commercial property. Communities sharing data should ease the path to universal open data.

Evolve Exchange Mechanisms – Transparent, pain-free data exchange is key to delivering a functionally rich, lean IoT data infrastructure; the alternative could be akin to a ‘European data mountain’ of needless and costly sensor deployments.

Building the tech stack for these needs is plenty of work, so as we define the business and technical models for IoT we need to act responsibly. Deploying and decommissioning software is cheap, just a couple of mouse clicks away. IoT deployments are very real: they consume natural resources, risk cluttering our environment and can loiter well past their usefulness.

Encouraging sharing behaviour within IoT through lean, shared infrastructure will prevent waste. The alternative would be a legacy of urban junk: we made a mess of space by not decommissioning hardware; let’s not do the same with our urban environment, but keep it open and centred on communities.