New IPIO rules feature

IPIO has a new programmable rules feature (IPIO app v3.03 onwards).

Using the rules feature, the app can be programmed to carry out an action based on an input event. For example: if the gate opens (input 1), then disarm the alarm (output 1) and switch on the lights (output 2).

The rules feature is ideal for building up sophisticated I/O behaviours on monitored sites.

The IPIO rules feature is not visible to all users. It is a powerful feature and only appears to users who have Portal Access. This lets site managers working in the ARC create rules without burdening end users with interface elements that they will not use.

IPIO rules are based on “if this then that” logic: if input 1 is triggered, then output 2 will arm.

There is a rules simulator feature that visually models the rule. An output can be set to match an input state or to oppose (flip) it.

There is a second section to rules (not shown above) that allows multiple outputs to follow each other. For example, if Output 2 is armed, then arm Output 3.
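The two kinds of rule described above can be sketched in a few lines. This is an illustrative sketch only – the rule and output names are invented and this is not the actual IPIO implementation – but it shows the shape of the logic: an input event triggers output actions, and an output change can cascade through follow rules.

```python
# Illustrative sketch of the rules logic (names invented, not the
# actual IPIO implementation).

# Input rules: (input, event) -> list of (output, new state) actions.
input_rules = {
    ("input1", "open"): [("output1", "disarmed"), ("output2", "on")],
    ("input2", "closed"): [("output2", "armed")],
}

# Second section: outputs following outputs, e.g. arming Output 2 arms Output 3.
follow_rules = {
    ("output2", "armed"): [("output3", "armed")],
}

def apply_event(source, event, outputs):
    """Apply an input event, cascading through any output-follow rules."""
    pending = list(input_rules.get((source, event), []))
    while pending:
        out, new_state = pending.pop(0)
        outputs[out] = new_state
        # A changed output may itself trigger follow rules.
        pending.extend(follow_rules.get((out, new_state), []))
    return outputs

outputs = {"output1": "armed", "output2": "off", "output3": "disarmed"}
apply_event("input1", "open", outputs)    # gate opens: disarm alarm, lights on
apply_event("input2", "closed", outputs)  # arming output 2 cascades to output 3
```

The cascade in `apply_event` is the interesting part: it is what lets a single input event ripple through a chain of outputs.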

We would be pleased to hear about your applications based on rules, and we look forward to making the rules section even more powerful, responsive and intelligent in due course.

Eight observations about IoT app development that have become our values.

We began working on IoT and security camera applications in 2002. Over the years we have made some observations that have become our development values.

An app is only part of an IoT system. It is the part of the system that the user touches and it is often the only way that users experience the system. Behind the app screen there are appliances, cloud services, computation, storage, APIs, connectivity, time, etc. All of these elements combine to give the end user an intelligent effect and control rendered seamlessly in the mobile app. We think of the intelligence of the app in the context of the whole system’s performance.

A powerful complex app cannot always be simple and easy to use but it must be comprehensible. IoT apps can be hard to understand because the app provides just a limited window on a partially hidden system. IoT apps often involve actions that occur automatically. This can disorientate a user. We work hard to make unavoidably complex merged physical and computational environments comprehensible.

Apps can be agents working on your behalf. The user interface is a place where you set up rules and adjust preferences. Once set up, the app should run semi-independently. When something happens in the system that requires human attention, the app should bring the issue to the user’s attention, setting out the causes and possible remedies.

An app should remember what you did in a particular context – and help you achieve that outcome again if the context is similar.

We stand over all our code. There are no black boxes of unknown code components.  All storage is encrypted. All communications are encrypted and authenticated.

We count the clicks. Each click is valued. The UI time is measured. The server and appliance response times are also measured.  We indicate the total process expected and elapsed time. Communicating what is expected and when it is expected is part of making the app experience comprehensible.

100% reliability is not possible even if the system is as simple as a kitchen light switch. Reporting, mitigation and notification are all strategies that we use to help manage reliability failures in complex IoT systems.

There are competing forces. It is critical to know who is doing what and that permissions are appropriate and manageable. It is also critical for the user to know that the system is secure, truthful and responsive. Competing against that is the risk that security layers become over-burdensome for activities that don’t need to be secure. We get the balance right.

Also see Navihedrons

Technology Readiness Levels

We have been working on a number of projects recently featuring TRL appraisals. Here is a handy reference to TRL 1 to 9. We found it a useful way to think about long-term R&D IoT projects and where they are at any one moment.

  • TRL 1 – basic principles observed
  • TRL 2 – technology concept formulated
  • TRL 3 – experimental proof of concept
  • TRL 4 – technology validated in laboratory
  • TRL 5 – technology validated in relevant environment (industrially relevant environment in the case of key enabling technologies)
  • TRL 6 – technology demonstrated in relevant environment (industrially relevant environment in the case of key enabling technologies)
  • TRL 7 – system prototype demonstration in operational environment
  • TRL 8 – system complete and qualified
  • TRL 9 – actual system proven in operational environment (competitive manufacturing in the case of key enabling technologies, or in space)

Full TRL descriptions here:




Arm and Disarm monitored video alarms


I/O Device

IPIO is a cloud-based I/O device. It is designed for arming and disarming monitored video alarms and/or opening gates. It can be used to replace a keypad. The IPIO device is robust and can be wall- or table-mounted.

End user Apps

IPIO Android and iOS mobile apps can support many IPIO sites. We have some customers with hundreds of sites, others with tens, and for those with just one site the app auto-configures to skip the site selection page (left). There are two levels of user: Manager and Guest.

Once you have entered a site in the app you can arm all – to secure the premises entirely – or simply arm a zone (centre). All buttons can be renamed or removed from the app interface.

A full log of events, available to Managers, provides an evidence trail of who arms or disarms and when (right).

Monitoring Centre Portal

For monitoring centres there is a web portal where all IPIO units can be monitored and managed.


Find out more here:

Observations about Agent apps

An app for that
Golden Krishna explained the “No UI” concept in his famous 2013 talk “The best interface is no interface”.
Others have spoken of “Invisible design”.

Part of what underlies these ideas is the concept that a system will help the user get rid of certain small tasks. Who needs a door handle when you can have an automatic sliding door?

The idea that there is “an app for that”, and that, and everything, has led to a certain world-weary tiredness. That, combined with the knowledge that a phone is a computer which can of course automate tasks, leads to a sense that there simply must be a better way forward than making more dumb apps.

Minimal chrome
Most of the apps we build in EyeSpyFX are apps for viewing security camera systems. The concepts of #NoUI or #Invisibledesign do not really apply to EyeSpyFX – the apps we build are necessarily visual, driven by the main function: viewing cameras. Nonetheless, we have tried to reduce the percentage of UI (sometimes derided as chrome) in our apps. For example, we have worked on camera apps where we put the camera stream as the main content and only introduce UI elements when the context demands. We have also tried to hide the pesky UI chrome behind a swipe-away navigation bar. Another strategy we tried is to use semi-transparent UI elements. All of these strategies reduce the UI and attempt to bring content to the user with minimal interactive hassle. All that is just good design rationalisation but you still wind up needing to go to the app, turn it on, look at stuff and take an action – the same as it ever was. All minimal UI strategies fall far short of automation, short of intelligence and short of a computing promise where things just happen on your behalf: good things, on time and appropriate – like automatic sliding doors.

Recent EyeSpyFX projects in the area of access control have enabled us to build apps that are closer to true #NoUI. Our interpretation of #NoUI is an app that is essentially a settings configuration tool. It is where you set up an agent service. Once set up and under normal circumstances the app does everything on your behalf without any further involvement from you. When the app does need your intervention, it sends you a rich notification enabling you to choose to take an action without opening the app.

Agent apps
The app is a context-aware agent acting on your behalf, responding to conditions according to settings you have programmed in. A design challenge in apps like that is not so much how to convey how to use the app; it is how to convey how to programme the app to have a particular behaviour in some future set of circumstances. It can be difficult to balance computational sophistication and power with comprehension and usability. In our projects we are trying to make #NoUI come true. We are building intelligent effects combining the thing (camera, access control unit) with the cloud service, sensor inputs and user profile. We are still mid-project but we can make the following observations:

  1. It is not easy to get right (we keep on finding exceptions).
  2. The app needs to be able to run in the background and make server calls in the background. We have encountered permission issues, privacy issues and variation in phone OS performance.
  3. Loss of connectivity is a problem. What happens to an automated service when it loses connection?
  4. You need to be able to override the agent service with an immediate manual activation.
  5. You need to be able to develop a UI that enables the user to understand, model and edit the process before and during the operation.
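Some of these observations can be made concrete with a small sketch. This is illustrative Python, not the app’s actual code, and all the names are invented: an agent decides an action from a programmed rule, a manual activation always takes precedence over the agent, and a lost connection is retried with backoff and then surfaced to the user rather than silently ignored.

```python
import time

class AgentService:
    """Illustrative agent sketch (invented names, not the app's code):
    acts on a programmed rule, honours manual override, retries on
    connectivity loss and reports failure instead of assuming success."""

    def __init__(self, send_command):
        self.send_command = send_command  # remote call; may raise ConnectionError
        self.manual_override = None       # a manual action always wins

    def decide(self, context):
        # Programmed rule (invented example): arm at night, disarm by day.
        return "arm" if context.get("night") else "disarm"

    def tick(self, context, retries=3, backoff=0.01):
        action = self.manual_override or self.decide(context)
        self.manual_override = None
        for attempt in range(retries):
            try:
                return self.send_command(action)
            except ConnectionError:
                time.sleep(backoff * (2 ** attempt))  # lost connectivity: back off
        return ("failed", action)  # surface the failure to the user

# A flaky transport that fails once, then succeeds.
calls = []
def flaky(action):
    calls.append(action)
    if len(calls) < 2:
        raise ConnectionError("no route to site")
    return ("ok", action)

agent = AgentService(flaky)
```

The override-first line in `tick` is observation 4 in miniature, and the retry loop is one (simplistic) answer to observation 3.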
Programmable app UI


Navihedrons and Roy Stringer

I met Roy Stringer at an academic conference at Glasgow School of Art in ~1998. He was there talking about his Navihedron concept (it was a coincidental meeting; I was at the conference speaking about a different subject). Something about Roy Stringer and his Navihedron idea has made me remember it all this time.

Stringer’s idea was that an icosahedron (12 vertices, 30 edges, 20 faces) could be effectively used as a navigation system to present non-linear content. Each vertex of the icosahedron would be a subject heading. Each point can be clicked. Two things happen when a point is clicked: 1. the Navihedron animates, bringing the clicked point to the frontal position; 2. content relevant to the clicked point’s subject heading appears in a frame or content area located adjacent to the Navihedron.

It is a very pleasing effect. The animation is engaging. The viewer can revolve the Navihedron exploring the 12 points. Each point has five geometrically related points that offer the next step in the story. The viewer can create their own path through the story. It allows users to browse in a natural way and yet remain within a cohesive story. In comparison linear presentations seem rather boring. The key to it may be that the reader can select their own entry point rather than the start of a linear story. They can click on the point they are interested in. That point is the start to their story. Once they have selected their start they can control their own story rather than being directed through numbered linear pages.
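The geometry behind this is easy to verify in code. The sketch below is illustrative (it is not Stringer’s software): it builds the 12 icosahedron vertices from the standard golden-ratio construction and recovers the adjacency, confirming that each subject heading has exactly five geometrically related neighbours.

```python
from itertools import product, combinations

# Build the 12 icosahedron vertices: cyclic permutations of (0, ±1, ±phi).
phi = (1 + 5 ** 0.5) / 2
verts = []
for a, b in product((-1, 1), repeat=2):
    verts += [(0, a, b * phi), (a, b * phi, 0), (a * phi, 0, b)]

def dist2(p, q):
    return sum((x - y) ** 2 for x, y in zip(p, q))

# Two vertices are linked if they are an edge apart
# (the shortest inter-vertex distance; edge length 2, so dist2 = 4).
edge2 = min(dist2(p, q) for p, q in combinations(verts, 2))
neighbours = {
    v: [w for w in verts if abs(dist2(v, w) - edge2) < 1e-9] for v in verts
}
```

`neighbours` is the navigation structure: from any heading, the five adjacent headings are the next steps in the story.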

When Stringer presented the Navihedron in 1998 a user could sign on, build their own Navihedron and edit the headings. The idea was that a finished Navihedron could be exported and then used in a standalone website.

Stringer made a physical model of the Navihedron from plastic tubes held together with elastic. Any subject could be explored using the model as a tool and prompt. I have made a few of these over the years and they are a great three-dimensional connection between information and space.
Roy Stringer showing opposite points. Image credit:

Stringer felt that the Navihedron might be useful to enable students to put together a story about a subject they were studying. As an example of this I made a sketch showing a Navihedron in the context of a school history class about the Vietnam war. It is not a linear presentation of history – you can enter from any point of interest.
Author’s sketch of a non-linear history lesson style Navihedron

The Navihedron idea has had a wide reach. For example, Andy Patrick, formerly of Adjacency, recalls Stringer’s influence on the design of early e-commerce sites, including the Apple Store.

Stringer was inspired by Ted Nelson who in 1965 first used the term “hypertext” to describe:
“nonsequential text, in which a reader was not constrained to read in any particular order, but could follow links and delve into the original document from a short quotation”.
Nelson says that HTML is what his project Xanadu was trying to prevent: HTML has “ever-breaking links, links going outward only, quotes you can’t follow to their origins, no version management, no rights management”.

Here is the brilliant, thoughtful Ted Nelson talking about the structure of computing, Xanadu and transclusion.

(En passant: don’t miss these great Ted Nelson one liners)

Roy Stringer sadly died very young in 2000. Although his work had huge impact, somehow Stringer’s Navihedron site has disappeared from the web; indeed it has fallen victim to what Nelson quips is a world of “ever-breaking links”. In fact I could not find any working Navihedron.

Image credit: the front page, from Alan Levine‘s memories of Roy Stringer:

Perhaps there is something in Stringer’s Navihedron work that has been superseded by the all-pervading paradigm of web-based navigation. The menu-at-the-top, sub-menu-down-the-side-or-dropping-down style of website (this one included) has become dominant.

Daniel Brown of Amaze (the company Stringer founded) helped to develop the Navihedra site, and he points out that Stringer’s original Navihedron was built using Shockwave. That technology has now been largely replaced by CSS and JavaScript.

In EyeSpyFX we set out to see if we could rebuild a Navihedron using modern web-friendly technologies. The Navihedron you see here is a cube and not an icosahedron as Stringer originally proposed – but the storytelling concept is essentially the same. We built it driven by curiosity really – just to see if we could do it. It is not quite performing as we intend but I think it is a good start (controllable and animated if viewing on a PC (browser dependent), just animated if viewing on mobile). We would like to build more and in time develop it into a fully editable system for creating Navihedrons – we are searching for the commercial reason to push it forward. The Navihedron concept is intriguing – perhaps a lost web treasure. Fully implementing a Navihedron using web technologies is surprisingly difficult – it is going to be a long-term project. This blog article and demo is just a step in the general direction.

App Life

Some iOS apps are now in their 10th year. In the EyeSpyFX app portfolio we haven’t got any that old – but we have some 7, 8 and 9 year olds. One of our apps, “My Webcam” would have been over ten years old but we retired it two years ago. In fact that app would be in its 16th year had it survived. Before its iOS manifestation “My Webcam” had a life on Blackberry and before that as a Java app on Nokia and Sony Ericsson.

Apps get discontinued for lots of reasons, for example:

  • Removed by the app store due to lack of updates
  • Competition from other apps
  • The app store closes
  • The original reason for the app does not exist anymore
  • No sustainable commercial rationale
  • The app function is now an OS function included in standard software
  • App is so bloated it is difficult and expensive to update
  • App was built using a previous development environment and refactoring it is not worth the cost
  • The app gets subdivided into smaller apps

Memory of “My Webcam” prompts me to reflect on the life cycle of apps in a general sense. I wonder, if you could go to the app store and do a filtered search by date of publication, what the average app life would be. In ten years’ time will there be apps that are 20 years old?

Our experience as developers is that as apps grow old they also grow bigger and then even bigger as more time passes and features get added.

There are some practical limits to app growth. Ultimately an app has to fit on a phone shaped screen and there is a limit to how many buttons you can fit in. If you keep adding functionality and features to an app year after year it inevitably becomes bloated. The bloated app – perhaps now resembling something more like “enterprise software” departs from the very concept of an app: “a small neatly packaged part of the internet”.

So why do apps grow? Top reasons include:

  • We don’t want to have lots of apps – we just want one app so our customers find it easy to choose
  • The PC interface does this – so the app should do it as well
  • The UI on the product does it – so the app should do it as well
  • The user base is growing and we need to give the users new stuff
  • Some of our customers are specialised/power users and we need to support them.

These are good corporate reasons but they strain the app design and tend to forget about the plain old user who wants the app to do the one thing the app name/icon suggests.

Serving everybody also does a disservice to the specialised power user. They come to the app with their special requirement but find their special feature located down the back of an app whose whole design and evolution serves a more general use case.

Rethinking a mature app into separate apps enables the company to specifically serve user requirements, for example: to open the door, to view the camera, to check the logs, to disarm the system, to view the recording from two weeks ago. It is of course tempting from a corporate point of view to keep all of these functions together in a super security app. However each function has a specific user group in mind. A suite of mini apps could be created, with the name of each app reflecting the function or user role.

Subdividing mature multifunctional apps into generation 2 apps can help with making an app easy to understand and use again. The really difficult question is, when is the right time to stop developing and start subdividing?

The point of subdivision can arrive simply for a practical internal corporate reason – being too costly to maintain, for example. A more fruitful sort of subdivision can occur as a result of a design review, led by users, to give the app a new lease of life.


Optimistic UI / Positive UI

In the world of the Internet of Things there is a growing design phenomenon called “Optimistic UI”.

Optimistic UI displays the action as complete at the same instant as the button press. When you press “On” the button will display the “On” state without any reference to the thing that is being switched “On”. There is an optimistic assumption made that the thing will just come “On”. Hit and Hope has been sanitized and is now called Optimistic UI.

Hit and hope leads to anxieties:

  • Did it work?
  • Was it already on?
  • Do I have a data connection?
  • Was my command received?
  • Is there anybody out there?

A classic example of Optimistic UI is a pan tilt control for a security camera app. The command to pan left is shown as “done” before the camera has moved and the streamed image is transported to and displayed on the mobile device. As you use the app and experience the multi-clicks, overshoots and streaming lag you can actually see the optimism being defeated — and yet it prevails.

The argument for Optimistic UI is that it is better to show instant feedback than to have users wait for a confirmation signal. This design preference is compelling because computationally powerful, high-resolution touch screens are so seductive. By comparison the communications link between the device and the thing is less attractive and slower. It is tempting for designers to work with the computing and forget about the communications. User expectations also promote Optimistic UI. Most of our UI experience is based on instant feedback, for example switching on a light or pressing the accelerator in a car. Instant feedback is also expected in many app UIs. The inconvenient fact that IoT feedback signals are often high-latency, delayed, narrow-band, remote or non-existent is simply ignored.

Figure 1: Optimistic UI switch on an app: Communications routing via a 2 bar mobile network, AWS, account management, home router, system hub and actuator to the target thing.

A more honest, less optimistic approach is to deploy a ghost state. When you hit the button a “ghost state” is displayed until the remote thing gives a 200 OK response. Then, and only then, does the button show as “done”.

Going back through time there are many example methods and protocols for dealing with delayed switching. The Engine Order Telegraph (EOT) is just one. The EOT on the bridge deck of a ship signals commands to the engine room. A full ahead command is dialed into the EOT up on the bridge. This causes the order dial to move and a bell to ring below in the engine room. The engineer hears the bell, reads the dial and sets the engine to full ahead. When the engineer gets the engine running at full speed he moves a second dial to full ahead. This shifts the corresponding confirmed dial on the bridge and rings a bell to indicate “full ahead – now”.

Figure 2: In this EOT photo we might assume the Ship is stationary as the SLOW AHEAD has been ordered but the engine status is STOP.

Optimistic UI misses a great opportunity. Rather than ignoring the process, perhaps a better approach is to make the process a UI feature. The beautiful J.W. Ray & Co EOT above may serve as a point of inspiration. It is possible to re-think the UI so that it gives an indication of the process in action sparked by a button press. Facebook Messenger and WhatsApp indicate when a message is sent and when it is received. Ghost states and other process indicators do not need to visually dominate the UI but they can significantly enhance its overall information value.

Figure 3: EyeSpyFX IPIO app: Front Door is Disarmed, Back Door is Armed, Yard is in process, showing a ghost state about to be Disarmed, Hallway is in process, showing a ghost state about to be Armed.

For the EyeSpyFX I/O control app, IPIO, we have made a modest attempt at indicating the process in action. We have re-designed simple switches to show two distinct sides: an Off/Disarmed/Open side and an On/Armed/Closed side. When a switch is thrown, a ghost state appears before the new state is confirmed by the system and displayed by the app.

We have called it Positive UI.
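A minimal sketch of this switch behaviour (illustrative names only, not the IPIO source): a press shows a ghost of the target state, and the switch commits only when the system confirms the actual state.

```python
class PositiveSwitch:
    """Illustrative sketch of a switch with a ghost (pending) state."""

    def __init__(self, state="Disarmed"):
        self.state = state    # confirmed state, shown solid in the UI
        self.pending = None   # ghost state, shown while awaiting confirmation

    def press(self):
        """User throws the switch: show a ghost of the target state."""
        self.pending = "Armed" if self.state == "Disarmed" else "Disarmed"

    def confirm(self, confirmed_state):
        """System reports the actual state (e.g. after a 200 OK)."""
        self.state = confirmed_state
        self.pending = None

    def display(self):
        # Ghost rendering: "Disarmed -> Armed..." while in process.
        return f"{self.state} -> {self.pending}..." if self.pending else self.state

switch = PositiveSwitch("Disarmed")
switch.press()            # ghost appears: in process
switch.confirm("Armed")   # system confirms: shown solid
```

Note that `confirm` takes the state reported by the system, not the state the user asked for – so if the command fails, the UI falls back to the truth rather than the optimistic assumption.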

Single and Stationary vs Multiple and Mobile

In many security applications the PC client is seen as the primary interface to the system. Use of the PC client is a dedicated task, normally carried out by a single person checking for a specific item of interest.

When a mobile app is introduced – assuming that the app is easy and effective to use – the number of users tends to go up. The frequency and type of use also tends to increase.

If people don’t need to log on to a PC and can instead check a mobile app then they tend to check in more frequently. Also, more people check in. One person with the app says to the next to get the app and that it is easy and the user numbers grow. The sort of use tends to diversify. People find different reasons to check in. For some, the reason is security, the same as it ever was, others may use the logs to check to see who is in or out at present (staff levels). Others may check for crowds on the shop floor, others to see if the delivery lorry has been dispatched yet (workflow). Yes, some might use the system to see if there is a queue in the staff canteen.

Once the security system is made accessible in the form of a mobile app people find the data contained within useful for lots of different reasons. It is therefore generally true (certainly for security apps and maybe for other domains also) that users tend to be single and stationary or multiple and mobile.

EyeSpyFX user data suggests that the ratio is 1:3. Of course this will vary from application to application and installation to installation.

Home thoughts from within

Maybe before the idea of the desktop was the idea of “home”. Home is where all things can be found and started from. It is a merge between destination and start point.

The Internet of Things (IoT) brings the idea of home one step further. Control and monitoring of the physical home and all the appliances within is now commonplace. Nest, Alexa, Xbox, Hive and Apple Home are just a few of the vendors.

The boundary traditionally drawn around the computer screen has been eroded. Our home space and our information space have collided. In that space we are no longer visitors who can go back home, we are here, permanent participants in a physical and information merged habitat. (Predicted by Bill Mitchell in his 1995 book City of Bits).

A new home page is doubly valuable because the border between work and home has also dissolved. Work is not just a place, a location; it is with you always – in your pocket as an email, as an IM, on a website, in a phone call. We are always “on”. There was a time, a few decades ago, when going to work and coming home were two discrete entities. Long before that they were one, when we were farming homesteaders. Now home and work seem closely aligned again. If a new all-encompassing home can be captured, its value is made even greater because it includes work.

Every manufacturer of home appliances – fridges, cookers, microwaves, vacuum cleaners, TVs, heating systems, lights, curtains, windows – has an IoT system. In EyeSpyFX we are working on a number of these projects. Information companies also think of home as their natural territory. Google, Apple, Microsoft and Amazon all have Home systems.

The merge between physical space and information is incomplete at this moment in time. IoT devices are still a little bit clunky – they have gateways and need to be paired. Information is still mostly accessed via a screen. But this is just a moment in time and our current model is just transitional. Steadily, physical space and information are coming together. Expect high stakes and profound change!

There is also a cyber hippy point about harmony and home. Home is a personal and family living space. We protect it, nurture it and shape it. The attainment of home as a physical and information hybrid entity is a gold rush for the soul.