New IPIO rules feature

IPIO has a new programmable rules feature (IPIO app v3.03 onwards).

Using the rules feature, the app can be programmed to carry out an action based on an input event. For example: if the gate opens (input 1), then disarm the alarm (output 1) and switch on the lights (output 2).

The rules feature is ideal for building up sophisticated I/O behaviours on monitored sites.

The IPIO rules feature does not appear to all users. It is a powerful feature and only appears to users who have Portal Access. This enables site managers working in the ARC to create rules while not burdening end users with user interface elements that they will not use.

IPIO rules are based on “if this then that” style logic: if input 1 is triggered, then output 2 will arm.

There is a rules simulator feature that visually models the rule. Outputs can match, flip or oppose an input state.

There is a second section to rules that allows multiple outputs to follow each other. For example, if output 2 is armed, then arm output 3.
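As a sketch of how the two rule types compose (the class and method names here are illustrative, not the IPIO API), input-triggered rules and output-follows-output rules can be modelled in a few lines:

```python
class Rules:
    """Toy model of IPIO-style rules: input events drive outputs,
    and outputs may in turn follow other outputs."""

    def __init__(self):
        self.input_rules = {}    # input id -> list of (output id, state)
        self.follow_rules = {}   # output id -> list of (output id, state)
        self.outputs = {}        # current output states

    def when_input(self, inp, then):
        self.input_rules.setdefault(inp, []).extend(then)

    def when_output(self, out, then):
        self.follow_rules.setdefault(out, []).extend(then)

    def _set(self, out, state):
        if self.outputs.get(out) == state:
            return                       # already in this state: stops chains looping
        self.outputs[out] = state
        for o, s in self.follow_rules.get(out, []):
            self._set(o, s)              # outputs following outputs (second section)

    def trigger(self, inp):
        for out, state in self.input_rules.get(inp, []):
            self._set(out, state)

# Example from above: gate opens (input 1) -> disarm alarm (output 1),
# switch on lights (output 2); output 3 then follows output 2.
rules = Rules()
rules.when_input(1, [(1, "disarmed"), (2, "on")])
rules.when_output(2, [(3, "on")])
rules.trigger(1)
```

The guard in `_set` is one simple way to keep a chain of follow rules from cycling forever; a real implementation would need a considered policy here.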

We would be pleased to hear about your rules-based applications, and we look forward to making the rules section even more powerful, responsive and intelligent in due course.

Navihedrons and Roy Stringer

I met Roy Stringer at an academic conference at Glasgow School of Art in ~1998. He was there talking about his Navihedron concept (it was a coincidental meeting; I was at the conference speaking about a different subject). Something about Roy Stringer and his Navihedron idea made me remember it all this time.

Stringer’s idea was that an icosahedron (12 vertices, 30 edges, 20 faces) could be used effectively as a navigation system for presenting non-linear content. Each point of the icosahedron would be a subject heading, and each point can be clicked. Two things happen when a point is clicked: 1. the Navihedron animates, bringing the clicked point to the frontal position; 2. content relevant to the clicked point’s subject heading appears in a frame or content area adjacent to the Navihedron.

It is a very pleasing effect. The animation is engaging. The viewer can revolve the Navihedron, exploring the 12 points. Each point has five geometrically related points that offer the next step in the story. The viewer can create their own path through the story. It allows users to browse in a natural way and yet remain within a cohesive story. In comparison, linear presentations seem rather boring. The key to it may be that the reader can select their own entry point rather than the start of a linear story. They can click on the point they are interested in; that point is the start of their story. Once they have selected their start they can control their own story rather than being directed through numbered linear pages.
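The geometry behind the “five next steps” can be checked in a short sketch (my own illustration, not Stringer’s code): building the icosahedron from the standard golden-ratio coordinates, every vertex turns out to have exactly five neighbours.

```python
import math
from itertools import product

PHI = (1 + 5 ** 0.5) / 2  # golden ratio

# The 12 vertices of an icosahedron: cyclic permutations of (0, ±1, ±PHI).
verts = []
for s1, s2 in product((1, -1), repeat=2):
    verts += [(0, s1, s2 * PHI), (s1, s2 * PHI, 0), (s2 * PHI, 0, s1)]

# Edges join the nearest vertex pairs (distance exactly 2 with these
# coordinates), so each subject heading links to five "next step" headings.
adj = {i: {j for j in range(12)
           if j != i and abs(math.dist(verts[i], verts[j]) - 2) < 1e-9}
       for i in range(12)}
```

A Navihedron would attach a subject heading to each vertex; clicking heading `i` rotates it to the front and offers the five headings in `adj[i]` as the next steps in the story.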

When Stringer presented the Navihedron in 1998, a user could sign on, build their own Navihedron and edit the headings. The idea was that a finished Navihedron could be exported and then used in a stand-alone website.

Stringer also made a physical model of the Navihedron from plastic tubes held together with elastic. Any subject could be explored using the model as a tool and prompt. I have made a few of these over the years and they are a great three-dimensional connection between information and space.
Roy Stringer showing opposite points.

Stringer felt that the Navihedron might be useful to enable students to put together a story about a subject they were studying. As an example of this I made a sketch showing a Navihedron in the context of a school history class about the Vietnam war. It is not a linear presentation of history – you can enter from any point of interest.
Author’s sketch of a non-linear history lesson style Navihedron

The Navihedron idea has had a wide reach. For example, Andy Patrick, formerly of Adjacency, recalls Stringer’s influence on the design of early ecommerce sites, including the Apple store.

Stringer was inspired by Ted Nelson, who in 1965 first used the term “hypertext” to describe:
“nonsequential text, in which a reader was not constrained to read in any particular order, but could follow links and delve into the original document from a short quotation”.
Nelson says that HTML is what his Project Xanadu was trying to prevent: “HTML has ever-breaking links, links going outward only, quotes you can’t follow to their origins, no version management, no rights management”.

Here is the brilliant, thoughtful Ted Nelson talking about the structure of computing, Xanadu and transclusion.

(En passant: don’t miss these great Ted Nelson one-liners)

Roy Stringer sadly died very young in 2000. Although his work had huge impact, Stringer’s Navihedron site has somehow disappeared from the web; it has fallen victim to what Nelson quips is a world of “ever-breaking links”. In fact I could not find any working Navihedron.

Image credit: from Alan Levine‘s memories of Roy Stringer.

Perhaps there is something in Stringer’s Navihedron work that has been superseded by the all-pervading paradigm of web-based navigation. The menu at the top, sub-menu down the side or dropping down, style of website (this one included) has become dominant.

Daniel Brown of Amaze (the company Stringer founded) helped to develop the Navihedra site, and he points out that Stringer’s original Navihedron was built using Shockwave. That technology has now been largely replaced by CSS and JavaScript.

At EyeSpyFX we set out to see if we could rebuild a Navihedron using modern web-friendly technologies. The Navihedron you see here is a cube and not an icosahedron as Stringer originally proposed – but the storytelling concept is essentially the same. We built it driven by curiosity really – just to see if we could do it. It is not quite performing as we intend but I think it is a good start (controllable and animated if viewing on a PC (browser dependent), just animated if viewing on mobile). We would like to build more and in time develop it into a fully editable system for creating Navihedrons – we are searching for the commercial reason to push it forward. The Navihedron concept is intriguing – perhaps a lost web treasure. Fully implementing a Navihedron using web technologies is surprisingly difficult – it is going to be a long-term project. This blog article and demo is just a step in the general direction.

App Life

Some iOS apps are now in their 10th year. In the EyeSpyFX app portfolio we haven’t got any that old – but we have some 7, 8 and 9 year olds. One of our apps, “My Webcam”, would have been over ten years old but we retired it two years ago. In fact that app would be in its 16th year had it survived. Before its iOS manifestation “My Webcam” had a life on Blackberry, and before that as a Java app on Nokia and Sony Ericsson.

Apps get discontinued for lots of reasons, for example:

  • Removed by the app store due to lack of updates
  • Competition from other apps
  • The app store closes
  • The original reason for the app does not exist anymore
  • No sustainable commercial rationale
  • The app function is now an OS function included in standard software
  • App is so bloated it is difficult and expensive to update
  • App was built using a previous development environment and re-factoring it is not worth the cost
  • The app gets subdivided into smaller apps

Memory of “My Webcam” prompts me to reflect on the life cycle of apps in a general sense. I wonder, if you could go to the app store and do a filtered search by date of publication, what the average app life would be. In ten years’ time will there be apps that are 20 years old?

Our experience as developers is that as apps grow old they also grow bigger and then even bigger as more time passes and features get added.

There are some practical limits to app growth. Ultimately an app has to fit on a phone-shaped screen and there is a limit to how many buttons you can fit in. If you keep adding functionality and features to an app year after year it inevitably becomes bloated. The bloated app – perhaps now resembling something more like “enterprise software” – departs from the very concept of an app: “a small neatly packaged part of the internet”.

So why do apps grow? Top reasons include:

  • We don’t want to have lots of apps – we just want one app so our customers find it easy to choose
  • The PC interface does this – so the app should do it as well
  • The UI on the product does it – so the app should do it as well
  • The user base is growing and we need to give the users new stuff
  • Some of our customers are specialised/power users and we need to support them.

These are good corporate reasons, but they strain the app design and tend to forget about the plain old user who wants the app to do the one thing the app name/icon suggests.

Serving everybody also does a disservice to the specialised power user. They come to the app with their special requirement but find their special feature located down the back in an app whose whole design and evolution serves a more general use case.

Rethinking a mature app as separate apps enables the company to serve specific user requirements, for example: to open the door, to view the camera, to check the logs, to disarm the system, to view the recording from two weeks ago. It is of course tempting from a corporate point of view to keep all of these functions together in one super security app. However, each function has a specific user group in mind. A suite of mini apps could be created, with the name of each app reflecting its function or user role.

Subdividing mature multifunctional apps into generation 2 apps can help with making an app easy to understand and use again. The really difficult question is, when is the right time to stop developing and start subdividing?

The point of subdivision can arrive simply for a practical internal corporate reason – being too costly to maintain, for example. A more fruitful sort of subdivision can also occur as a result of a design review – led by users – giving the apps a new lease of life.


Single and Stationary vs Multiple and Mobile

In many security applications the PC client is seen as the primary interface to the system. Use of the PC client is a dedicated task, normally carried out by a single person checking for a specific item of interest.

When a mobile app is introduced – assuming that the app is easy and effective to use – the number of users tends to go up. The frequency and type of use also tends to increase.

If people don’t need to log on to a PC and can instead check a mobile app, they tend to check in more frequently. Also, more people check in. One person with the app tells the next to get it, says it is easy, and the user numbers grow. The sort of use tends to diversify. People find different reasons to check in. For some, the reason is security, the same as it ever was; others may use the logs to check who is in or out at present (staff levels). Others may check for crowds on the shop floor, or check to see if the delivery lorry has been dispatched yet (workflow). Yes, some might even use the system to see if there is a queue in the staff canteen.

Once the security system is made accessible in the form of a mobile app people find the data contained within useful for lots of different reasons. It is therefore generally true (certainly for security apps and maybe for other domains also) that users tend to be single and stationary or multiple and mobile.

EyeSpyFX user data suggests that the ratio of single/stationary to multiple/mobile use is about 1:3. Of course this will vary from application to application and installation to installation.

Android wish list for 2017

Android 1.0. HTC Dream - 2008


When the first Android phones were launched it was unclear (to me at least) how the ideas of “search” and “mobile phone” would come together. (Crazy, I know!)

Fast forward to 2017: voice command and search integration with a security camera app might soon allow a user to say the commands:
“Go to camera 34,
go back an hour,
go forward 5 minutes,
go back 1 minute,
zoom in,
pan left,
jump to live,
switch to Front Gate camera”.

The voice commands would control an app which would chromecast to a big screen.

This vision is not exceptionally fanciful as many security camera apps can do all of the above today – except using a visual touch UI.
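A first cut at interpreting commands like those above is a pattern match from spoken phrases to app actions. A sketch (the action names are invented for illustration; a real system would lean on search for fuzzy matching, as discussed below):

```python
import re

# Hypothetical mapping from spoken phrases to (action, argument) tuples.
PATTERNS = [
    (r"go to camera (\d+)",        lambda m: ("select_camera", int(m.group(1)))),
    (r"switch to (.+) camera",     lambda m: ("select_camera_by_name", m.group(1))),
    (r"go back (?:an|1) hour",     lambda m: ("seek", -3600)),
    (r"go back (\d+) minutes?",    lambda m: ("seek", -60 * int(m.group(1)))),
    (r"go forward (\d+) minutes?", lambda m: ("seek", 60 * int(m.group(1)))),
    (r"zoom in",                   lambda m: ("zoom", 1)),
    (r"pan left",                  lambda m: ("pan", -1)),
    (r"jump to live",              lambda m: ("live",)),
]

def parse(command):
    """Return the first matching (action, args) for a spoken command."""
    for pattern, action in PATTERNS:
        m = re.fullmatch(pattern, command.strip().lower())
        if m:
            return action(m)
    return ("unknown", command)
```

For example, `parse("Go to camera 34")` yields `("select_camera", 34)` and `parse("go back an hour")` yields `("seek", -3600)`; anything unrecognised falls through to `("unknown", …)`, which is exactly where search-based interpretation would take over.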

Voice commands and search are closely connected. A voice command is inherently vague. Search is a key computational mechanism used to interpret a voice command and find a best-fit reply.

There are just two barriers holding back the vision as outlined above: 1) in-app search and 2) custom voice commands.
1) In-app search is available only in a very limited sense at present. You can have Google index the app manifest, and app functions then show up when you do a relevant search. This however does nothing to help search the user-generated content within an app.
Google has tried search of data held on private computers before. In 2004 Google launched a PC application called Desktop. Google Desktop indexed all the data on your PC. The project was closed in 2011 because Google “switched focus to cloud based content storage”.
2) Requests for custom voice actions from third-party app developers are currently closed (also the case for SIRI, by the way).

Custom voice commands - not yet (Dec 2016)


With neither in-app search nor custom voice actions available, it seems the vision of fully integrated voice control of apps is not viable – for now.

If OK Google and SIRI continue to grow in popularity will the pressure for custom voice commands also be the catalyst for enabling in app search?

Voice actions and in-app search could be (more easily?) achieved if apps moved from the phone to a Google/Apple account in the cloud. An added advantage of apps in the cloud is that we could log on from anywhere and use our custom apps.

Choose Google or Apple


With thanks to Uber, maps, cheating in pub quizzes and countless other uses, it is now clear that search and phones are a perfect match. It seems (to me at least) that the next wave of development for search and phones will involve voice commands. Voice-command-based interfaces also seem to fit well with wearables and control of IoT devices.

To conclude, a seasonal wish list for 2017:

  • In app search for user generated data
  • Custom voice commands made accessible to third party app developers
  • Move the concept of apps away from the phone and onto a Google account. No more downloading.

EyeSpyFX introduce a new library for reading H264 Video.

For network camera and VMS manufacturers who need to build a mobile solution, SFX100 is a code library for building iOS and Android apps that decode and display MJPEG and H264 video using RTSP over TCP, RTSP over HTTP and RTSP over HTTPS.

Unlike bulky open-source projects such as FFmpeg, Live555 and VLC, published under GPL or LGPL, SFX100 is a proprietary library, available under licence, that is ready for immediate and efficient deployment in commercial mobile projects.

SFX100 is optimised for security camera video applications, uniquely offering a secure layer for streaming RTSP tunnelled over HTTPS.
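For context, a common scheme for RTSP-over-HTTP(S) tunnelling (the approach popularised by QuickTime streaming; this sketch is illustrative and says nothing about SFX100’s internals) pairs a GET channel for the video with a POST channel for base64-encoded RTSP commands, tied together by a session cookie:

```python
import base64
import uuid

def tunnel_requests(host, path):
    """Build the paired GET/POST requests that open an RTSP-over-HTTP
    tunnel. Over HTTPS, both would be sent on TLS-wrapped connections."""
    cookie = uuid.uuid4().hex  # ties the two HTTP connections together
    get_req = (f"GET {path} HTTP/1.1\r\n"
               f"Host: {host}\r\n"
               f"x-sessioncookie: {cookie}\r\n"
               f"Accept: application/x-rtsp-tunnelled\r\n\r\n")
    post_req = (f"POST {path} HTTP/1.1\r\n"
                f"Host: {host}\r\n"
                f"x-sessioncookie: {cookie}\r\n"
                f"Content-Type: application/x-rtsp-tunnelled\r\n"
                f"Content-Length: 32767\r\n\r\n")
    return cookie, get_req, post_req

def encode_rtsp(command: str) -> bytes:
    # RTSP requests sent up the POST channel are base64 encoded
    return base64.b64encode(command.encode())
```

Because the tunnel is ordinary HTTP(S) traffic, it passes through proxies and firewalls that would block a raw RTSP port – which is the practical appeal of a secure tunnelled layer.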

SFX100 is exemplified in EyeSpyFX’s premier iOS mobile app, “Doorcam”.

Key features include:

  • Secure layer for streaming RTSP tunneled over HTTPS.
  • Per project commercial licence
  • Optimised code for security camera video types
  • iOS and Android libraries available
  • Reads RTSP streams and provides mechanism to pass to phone based native decoders
  • Compatible with IPv6

Contact us for further information about how SFX100 can be deployed in mobile apps.

Security and the Internet of Things; The Internet of Security

The Internet of Things (IoT) is a hugely hyped concept. The hype is fuelled by multi-million-dollar acquisitions such as Google’s purchase of Nest. So far, much of the IoT action has been in the domestic consumer space.

One of the main ideas in IoT is the idea of smart objects. In Security, the tendency to build centralised server systems runs somewhat counter to the IoT idea of smart objects. In Security, intelligence, analytics and computational features tend to reside in the server rather than in the objects: the cameras, sensors and controllers that connect to the system. This contrasts with the consumer IoT, where there are fewer central systems. In the consumer IoT, features reside in the smart devices themselves, perhaps supported by generalised metadata from a cloud service. The Nest thermostat, for example, is a smart object in itself, not an object that relies on a connection to a smart server.

There are signs that Security is moving to a more edge-based IoT style architecture. The AXIS Camera Companion system is one example of this. If an Internet of Security is to prosper, then objects need to be discoverable and configurable, and need to be able to respond to queries about the features they possess. In higher-end security cameras this level of programmability is already in place.

At IFSEC we have seen, for several years now, the development of mobile clients for server-based security camera systems. Of course this is good, but really it is simply adding a mobile layer to an old architecture. This trend contrasts with the next wave of mobile apps on ever more powerful mobile devices that can connect directly to cameras and other smart security devices and present customised UI elements to suit the properties of each individual device. Increasingly it is clear that a central server is not required. Instead, edge devices organised and managed by powerful, easy-to-use mobile apps stand to become a prevalent architectural model. Could this be the future Internet of Security?

At IFSEC14 EyeSpyFX are pleased to demonstrate an alpha version of our own Internet of Security product, called Timeline. Timeline is a mobile app system that manages and enables video from AXIS cameras and combines it with access events from the new A1001 access control product from AXIS. Timeline needs no server; all its capability is drawn from the mobile app. Essentially, Timeline is a mobile video management system (mobileVMS). Timeline is an example of the next generation of mobile apps for the security industry: ultra-lightweight, agile and extremely powerful, with a focus on ease of use. To find out more about this potentially disruptive next wave of app technology, call in and talk to us at stand B110 at IFSEC 14.

We would like to invite installers and system integrators to join our advance-thinking test flight group and help us shape the future of mobile security camera systems.

Timeline: The Internet of Security


About Alert Notifications in Viewer for Axis Camera Companion


In App Notifications based on Motion Detection events are now available as an In App Purchase within Viewer for AXIS Camera Companion for iOS.

What is an “In App Notification”?

An “In App Notification” is a notification that arrives to an app. A Notification appears as a banner at the top of the screen. A Notification normally looks like this:


The Notification also appears in the Notification Centre.

How much does it cost?

Access to Notifications is enabled via an In App Purchase costing $2.99/£1.99/€2.69.


How to set up Motion Detection Notifications

Motion Detection events are used by the camera to trigger the sending of Notifications to the iOS device.

You can adjust Motion Detection settings in the AXIS Camera Companion PC application. For example, in the PC application you can constrain Motion Detection to occur only in a set portion of the screen.

Once you have set up Motion Detection using the PC Application you can then set up Notifications using the Mobile App. In the Mobile App go to the camera you have set up with Motion Detection and select the Notifications control panel. Here you can switch on Notifications and set the times that Notifications are active.

Motion Detection Recordings

When a Motion Detection event occurs a corresponding recording is made and stored on the SD card of the camera. When a Notification is received and clicked it will open the app in the Recordings area. The Notification text includes the time that the Motion Detection event took place. You can navigate to the corresponding recording using the time in the recording name.

Not all Cameras suit the Notifications service.

Viewer for AXIS Camera Companion’s Notification service is a powerful security feature. It does not suit every camera and should be deployed with due consideration. A camera looking at a busy view (for instance a busy shop floor) is not suitable for Notifications – you will simply receive too many! The Notification service should be used on cameras where movement is not normal or is of specific interest (for instance the back door of the store). In this case the Notifications will be fewer, appropriate and interesting.

Clothes & Phones

In any 2013 class of university students:
– everyone is wearing clothes
– most students have keys in their pockets
– most students have some cash on them
– everyone has a phone
Isn’t it amazing that people have worn clothes for thousands of years, and carried keys and cash for hundreds of years, but have only carried phones for the last 5–10 years? Clothes and phones are the only two items that all students carry. A revised “hierarchy of needs” might reasonably now link warmth and connectivity.

The phone has taken up permanent residence in people’s pockets and bags. Even while we sleep it can often be found under the pillow.
In view of our visceral, wholly encompassing attachment to phones, it seems rational to suppose that a body network will power other phone-like interfaces that are more easily accessed than taking a phone out of a pocket and holding it to the ear to make a call. On reflection it seems absurd to carry a black rectangular box in your pocket, then lift it out, twiddle with it and put it back in your pocket. How did we get here? Will films made in 2013 be easily time-calibrated because actors do the handheld phone manoeuvre?

Ideally, theoretically, phones and people may merge, with a total embodiment of the phone into the nervous system. Just thinking about a phone call would cause instantaneous connection, and thinking the words would automatically send a message. Of course an internally mounted phone/human scenario would be nice, but it still seems like a very remote possibility; exo-skeletal phone accessories, however, are already commonplace. Bluetooth headsets for taking calls while driving are an indicator of what is to come. Men’s jackets commonly have two inside pockets: one for a wallet, one for a phone ☺. The Pebble watch concept offers easy-to-read texts and convenient switching on/off of phone calls. The Pebble and the headset together point toward the idea of the phone increasingly staying put in the pocket while phone functions are carried out using a body network and peripheral accessories. Building on this idea, Golden Krishna from Samsung (@goldenkrishna) tweeted at SXSW13: “we serve computers but its time computers serve us” #NoUI

So what are the challenges, and where are the likely opportunities, for a conjoined future of clothes and phones – or indeed a bodily embedded phone? Here is a quick look at the components of the problem; at least, the components of the problem of the phone as it appears today.

Screens:
It is hard to imagine a phone without a screen. The screen could be very small – small enough perhaps to fit into a contact lens; there may even one day be a nanoscale device that could be implanted on the retina. We could train our eyes to use the area of the retina that sees the screen. More realistically, Google Glass and many other projects have shown the possibility of screens mounted in glasses. Of course the downside of needing to wear glasses has to be overlooked.
There have been lots of prototype foldable, rollable and bendable screens. Bendable screens could more easily fit the shape of the body. A folding screen could fold out to suit the size of screen required for the occasion.
Multiple screens positioned around the body could offer an alternative method of controlling the phone. The Pebble watch is a pioneer design in that idea domain. One could also imagine screens worn on a ring, a bracelet or a necklace.
Another idea is that your phone could connect with any nearby screen, adopting a tablet or laptop as a temporary big screen.

Type input:
Typing stuff in has proven itself to be a very good survivor in the evolution of computing. Candidate ideas that could dispense with typing include; voice recognition, context based intelligence, gestures and a different sort of keyboard.
Voice recognition input of commands has recently been pushed to the fore with the launch of SIRI. Some may say it was sent backwards with SIRI. Is voice recognition command input one of those science fiction wishes that turn out to be a real-life disappointment (like video telephony)?
Another idea is that the phone can understand the context the user is in at any one moment and automatically deduces what you want to do and sets the command up for you without much or any human input.
Gestures could become a useful additional way to control your phone. To make a call we could simply make the small finger, index finger symbol for a phone call to initiate a new call.
There have been lots of design suggestions for better keyboard layouts than QWERTY. None have been adopted. The power of “it is this way because that is the way it was yesterday” has taken very strong hold over keyboards.

Battery:
Bulky, heavy batteries cause problems for embedding phones into clothes and for wearing phones bodily. Charging the battery is also difficult: today you need to plug the phone in somewhere – off body.
Potentially there may be an opportunity to trickle-charge a battery from the kinetic and/or heat energy of the body. This charging could be supplemented with solar charging. If the battery were being constantly charged maybe it could be smaller, and if so, perhaps it could be concealed in an item of clothing or implanted bodily in some soft tissue.

Storage:
To say any technological problem is solved is in part to suggest that further innovation is not required. That is not the case with storage; however, current storage technology such as a 32GB micro SD card is small enough and good enough to be sewn into a t-shirt or embedded in the body with a day-surgery procedure. Unlimited storage can be accessed in the cloud. Problem solved!

Processor:
We already have a powerful computer in a phone. Moore’s law suggests that computing power will increase exponentially. The processor part of the phone is already small; it seems certain that we can wear or embed a powerful processor on or in the body. Biotechnology-based developments may even provide us in the future with computational ability built using the living fabric of the body. So it seems that phones and clothes are destined for each other – but that is only a starting point.

Systems and software:
This is where all the action is going to be. The idea of a worn personal computer – a computer for life – is unprecedented. Pop-up context-based alerts, with relevant information served at the right moment and ranging from short texts to rich media, seem like a certain area for development. We are beginning to see the first clues as to how that concept may form in the way people use smartphones, notifications and apps today.