Android wish list for 2017

Android 1.0. HTC Dream - 2008

When the first Android phones were launched it was unclear (to me at least) how the ideas of “search” and “mobile phone” would come together. (Crazy, I know!)

Fast forward to 2017: voice command and search integration with a security camera app might soon allow a user to say the commands:
“Go to camera 34,
go back an hour,
go forward 5 minutes,
go back 1 minute,
zoom in,
pan left,
jump to live,
switch to Front Gate camera”.

The voice commands would control an app, which would cast to a big screen via Chromecast.
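As a rough illustration of how such spoken commands might be translated into app actions, here is a minimal sketch. The command grammar and the (action, argument) pairs are invented for this example; a real app would wire them into its player, PTZ and casting controls.

```python
import re

def _qty(s):
    # "a"/"an" in speech means one
    return int(s) if s.isdigit() else 1

def _unit(u):
    return 3600 if u == "hour" else 60

# Hypothetical grammar mapping spoken phrases to app actions.
PATTERNS = [
    (re.compile(r"go to camera (\d+)"),
     lambda m: ("select_camera", int(m.group(1)))),
    (re.compile(r"go (back|forward) (\d+|an?) (hour|minute)s?"),
     lambda m: ("seek", (-1 if m.group(1) == "back" else 1)
                * _qty(m.group(2)) * _unit(m.group(3)))),
    (re.compile(r"zoom in"), lambda m: ("zoom", 1)),
    (re.compile(r"pan left"), lambda m: ("pan", -1)),
    (re.compile(r"jump to live"), lambda m: ("live", None)),
    (re.compile(r"switch to (.+) camera"),
     lambda m: ("select_named", m.group(1))),
]

def interpret(utterance):
    """Translate one spoken command into an (action, argument) pair."""
    text = utterance.lower().strip().strip('",.')
    for pattern, build in PATTERNS:
        m = pattern.fullmatch(text)
        if m:
            return build(m)
    return ("unknown", utterance)
```

A command the grammar does not recognise falls through to `("unknown", …)`, which is where the search-based interpretation discussed below would take over.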

This vision is not exceptionally fanciful, as many security camera apps can do all of the above today – except using a visual touch UI.

Voice commands and search are closely connected. A voice command is inherently vague. Search is a key computational mechanism used to interpret a voice command and find a best-fit reply.
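That best-fit step can be illustrated with the fuzzy matcher in Python's standard library. The camera names are hypothetical; the point is that an imprecise (or misheard) utterance is resolved against a known vocabulary.

```python
import difflib

# Hypothetical camera names for illustration.
CAMERA_NAMES = ["Front Gate", "Back Door", "Warehouse", "Car Park"]

def best_fit_camera(spoken, cutoff=0.5):
    """Resolve a possibly misheard spoken name to the closest camera,
    or None when nothing is a plausible match."""
    lowered = {name.lower(): name for name in CAMERA_NAMES}
    matches = difflib.get_close_matches(spoken.lower(), list(lowered),
                                        n=1, cutoff=cutoff)
    return lowered[matches[0]] if matches else None
```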

There are just two barriers holding back the vision as outlined above: 1) in app search and 2) custom voice commands.
1) In app search is available only in a very limited sense at present. You can have Google index the app manifest, so that app functions show up when you do a relevant search. This, however, does nothing to help search the user generated content within an app.
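A sketch of what searching user generated content inside an app could look like: a tiny inverted index over the user's own data. The notes and their ids are invented for the example.

```python
import re
from collections import defaultdict

def build_index(documents):
    """Build a word -> {doc_id, ...} inverted index over user content."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index[word].add(doc_id)
    return index

def search(index, query):
    """Return the ids of documents containing every query word."""
    words = re.findall(r"[a-z0-9]+", query.lower())
    if not words:
        return set()
    result = index.get(words[0], set()).copy()
    for word in words[1:]:
        result &= index.get(word, set())
    return result
```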
Google have tried searching data held on private computers before. In 2004 Google launched a PC application called Desktop. Google Desktop indexed all the data on your PC. The project was closed in 2011 because Google “switched focus to cloud based content storage”.
2) Requests from third party app developers for custom voice actions are currently closed. (This is also the case for SIRI, incidentally.)

Custom voice commands - not yet (Dec 2016)

With both in app search and custom voice actions unavailable, it seems that the vision for fully integrated voice control of apps is not viable – for now.

If OK Google and SIRI continue to grow in popularity will the pressure for custom voice commands also be the catalyst for enabling in app search?

Voice actions and in app search could be (more easily?) achieved by moving the location of apps from the phone to a Google/Apple account in the cloud. An added advantage of apps in the cloud is that we could log on from anywhere and use our custom apps.

Choose Google or Apple

With thanks to Uber, maps, cheating in pub quizzes and countless other uses, it is now clear that search and phones are a perfect match. It seems (to me at least) that the next wave of development for search and phones will involve voice commands. Voice command based interfaces also seem to fit well with wearables and control of IoT devices.

To conclude, a seasonal wish list for 2017:

  • In app search for user generated data
  • Custom voice commands made accessible to third party app developers
  • Move the concept of apps away from the phone and onto a Google account. No more downloading.

EyeSpyFX introduce a new library for reading H264 Video.

For network camera and VMS manufacturers who need to build a mobile solution, SFX100 is a code library for building iOS and Android apps that decode and display MJPEG and H264 video using RTSP over TCP, RTSP over HTTP and RTSP over HTTPS.

Unlike bulky open source projects such as FFmpeg, Live555 and VLC, which are published under GPL or LGPL, SFX100 is a proprietary library available under licence that is ready for immediate and efficient deployment in commercial mobile projects.

SFX100 is optimised for security camera video applications, uniquely offering a secure layer for streaming RTSP tunneled over HTTPS.
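SFX100 itself is proprietary and its internals are not public, but RTSP-over-HTTP(S) tunneling is commonly done with the QuickTime-style convention: a GET request opens the server-to-client channel, a POST carrying the same `x-sessioncookie` sends base64-encoded RTSP commands, and for HTTPS the same exchange simply runs inside TLS. A sketch of constructing those messages (the host name is hypothetical):

```python
import base64
import uuid

def tunnel_get_request(host, path="/"):
    """Build the HTTP GET that opens the server-to-client half of a
    QuickTime-style RTSP tunnel."""
    cookie = uuid.uuid4().hex  # links the GET and POST channels
    request = (
        f"GET {path} HTTP/1.0\r\n"
        f"Host: {host}\r\n"
        f"x-sessioncookie: {cookie}\r\n"
        "Accept: application/x-rtsp-tunnelled\r\n"
        "Pragma: no-cache\r\n"
        "\r\n"
    )
    return cookie, request

def tunnel_rtsp_payload(rtsp_request):
    """RTSP commands sent on the POST channel are base64-encoded."""
    return base64.b64encode(rtsp_request.encode("ascii"))
```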

SFX100 is exemplified in EyeSpyFX’s premier iOS mobile app, “Doorcam”. (https://itunes.apple.com/gb/app/doorcam/id1060661561?mt=8)

Key features include:

  • Secure layer for streaming RTSP tunneled over HTTPS.
  • Per project commercial licence
  • Optimised code for security camera video types
  • iOS and Android libraries available
  • Reads RTSP streams and provides mechanism to pass to phone based native decoders
  • Compatible with IPv6

Contact us on info@eyespyfx.com for further information about how SFX100 can be deployed in mobile apps.

Security and the Internet of Things; The Internet of Security

The Internet of Things (IoT) is a hugely hyped concept. The hype is fueled by multi-million-dollar acquisitions such as Google’s purchase of NEST. So far, much of the IoT action has been in the domestic consumer space.

One of the main ideas in IoT is that of smart objects. In security, the tendency to build centralised server systems runs somewhat counter to this idea: intelligence, analytics and computational features tend to reside in the server rather than in the objects – the cameras, sensors and controllers that connect to the system. This contrasts with the consumer IoT, where there are fewer central systems and features reside in the smart devices themselves, perhaps supported by generalised metadata from a cloud service. The NEST thermostat, for example, is a smart object in itself, not an object that relies on a connection to a smart server.

There are signs that security is moving to a more edge based, IoT style architecture. The AXIS Camera Companion system is one example of this. If an Internet of Security is to prosper then objects need to be discoverable and configurable, and need to be able to respond to queries about the features they possess. In higher end security cameras this level of programmability is already in place.
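The idea of objects that respond to queries about the features they possess can be sketched as follows. The JSON capability document and its field names are hypothetical, not a real camera API; the point is that the client builds its UI from whatever the device advertises.

```python
import json

def controls_for(capability_json):
    """Decide which UI controls to present, given a device's advertised
    capabilities. Field names here are hypothetical."""
    caps = json.loads(capability_json)
    controls = ["live_view"]  # every camera gets a live view
    if caps.get("ptz"):
        controls.append("ptz_joystick")
    if caps.get("audio"):
        controls.append("audio_toggle")
    if caps.get("local_storage"):
        controls.append("recordings_browser")
    return controls
```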

At IFSEC we have seen for several years now the development of mobile clients for server based security camera systems. Of course this is good, but really it is simply adding a mobile layer to an old architecture. This trend contrasts with the next wave of mobile apps on ever more powerful mobile devices that can connect directly to cameras and other smart security devices and present customised UI elements to suit the properties of each individual device. Increasingly it is clear that a central server is not required. Instead, edge devices organised and managed by powerful, easy to use mobile apps stand to become a prevalent architectural model. Could this be the future Internet of Security?

At IFSEC14 EyeSpyFX are pleased to demonstrate an alpha version of our own Internet of Security product called Timeline. Timeline is a mobile app system that manages and enables video from AXIS cameras and combines it with access events from the new A1001 access control product from AXIS. Timeline needs no server; all its capability is drawn from the mobile app. Essentially, Timeline is a mobile Video Management System (mobileVMS). Timeline is an example of the next generation of mobile apps for the security industry: ultra lightweight, agile and extremely powerful, with a focus on ease of use. To find out more about this potentially disruptive next wave of app technology, call in and talk to us at stand B110 at IFSEC14.

We would like to invite installers and system integrators to join our advanced thinking test flight group and help us shape the future of mobile security camera systems.

Timeline: The Internet of Security

About Alert Notifications in Viewer for Axis Camera Companion

In App Notifications based on Motion Detection events are now available as an In App Purchase from within Viewer for AXIS Camera Companion for iOS.

What is an “In App Notification”?

An “In App Notification” is a notification that arrives in an app. It appears as a banner at the top of the screen.

The Notification also appears in the Notification Centre.

How much does it cost?

Access to Notifications is enabled via an In App Purchase costing $2.99/£1.99/€2.69. You can buy the In App Purchase from within the app.

How to set up Motion Detection Notifications

Motion Detection events are used by the camera to trigger the sending of Notifications to the iOS device.

You can adjust Motion Detection settings by going to the AXIS Camera Companion PC application. For example in the PC application you can constrain the Motion Detection to occur only in a set portion of the screen.

Once you have set up Motion Detection using the PC Application you can then set up Notifications using the Mobile App. In the Mobile App go to the camera you have set up with Motion Detection and select the Notifications control panel. Here you can switch on Notifications and set the times that Notifications are active.
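The “times that Notifications are active” setting amounts to a time-window test. A minimal sketch, allowing for windows that wrap past midnight (e.g. overnight monitoring from 18:00 to 07:00):

```python
from datetime import time

def notification_active(now, start, end):
    """Is a motion event at `now` inside the user's active window?
    Windows may wrap past midnight (e.g. 18:00 to 07:00)."""
    if start <= end:
        return start <= now <= end
    return now >= start or now <= end
```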

Motion Detection Recordings

When a Motion Detection event occurs, a corresponding recording is made and stored on the SD card of the camera. When a Notification is received and tapped, it will open the app in the Recordings area. The Notification text includes the time that the Motion Detection event took place, so you can navigate to the corresponding recording using the time in the recording name.

Not all Cameras suit the Notifications service.

Viewer for AXIS Camera Companion’s Notification service is a powerful security feature, but it does not suit every camera and should be deployed with due consideration. A camera looking at a busy view (for instance a busy shop floor) is not suitable for Notifications – you will simply receive too many! The Notification service should be used on cameras where movement is not normal or is of specific interest (for instance the back door of the store). In this case the Notifications will be fewer, appropriate and interesting.
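For cameras that sit between these extremes, one mitigation (our suggestion here, not a feature of the app) is a cooldown: after a notification is delivered, further motion events are suppressed for a fixed interval. A sketch:

```python
def make_throttle(cooldown_seconds):
    """Suppress notifications that arrive within `cooldown_seconds`
    of the last one that was let through."""
    last_sent = None

    def should_notify(event_time):
        nonlocal last_sent
        if last_sent is None or event_time - last_sent >= cooldown_seconds:
            last_sent = event_time
            return True
        return False

    return should_notify
```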

Clothes & Phones

In any 2013 class of University students;
– everyone is wearing clothes
– most students have keys in their pockets
– most students have some cash money on them
– everyone has a phone
Isn’t it amazing that people have worn clothes for thousands of years, and carried keys and cash for hundreds of years, but have carried phones for only the last 5–10 years? Clothes and phones are the only two items that all students carry. A revised “hierarchy of needs” might reasonably now link warmth and connectivity.

The phone has taken up permanent residence in people’s pockets and bags. Even while we sleep it can often be found under the pillow.
In view of our visceral, wholly encompassing attachment to phones, it seems rational to suppose that a body network will power other phone-like interfaces that are more easily accessed than taking a phone out of a pocket and holding it to the ear to make a call. On reflection it seems absurd to carry a black rectangular box in your pocket, lift it out, twiddle with it and put it back in your pocket. How did we get here? Will films made in 2013 be easily time calibrated because actors do the handheld phone maneuver?

Ideally, theoretically, phones and people may merge, with a total embodiment of the phone into the nervous system: just thinking about a phone call will cause instantaneous connection, and thinking the words will automatically send a message. An internally mounted phone/human scenario would be nice, but it still seems like a very remote possibility; exo-skeletal phone accessories, however, are already commonplace. Bluetooth headsets for taking calls while driving are an indicator of what is to come. Men’s jackets commonly have two inside pockets, one for a wallet and one for a phone ☺. The Pebble watch concept offers easy to read texts and convenient switching of phone calls on and off. The Pebble and the headset together point toward the idea of the phone increasingly staying put in the pocket while phone functions are carried out using a body network and peripheral accessories. Building on this idea, Golden Krishna from Samsung (@goldenkrishna) tweeted at SXSW13: “we serve computers but its time computers serve us” #NoUI.

So what are the challenges and where are the likely opportunities for a co-joined future of clothes and phones or indeed a bodily-embedded phone? Here is a quick look at the components of the problem, at least the components of the problem of the phone as it appears today.

Screens:
It is hard to imagine a phone without a screen. The screen could be very small, small enough perhaps to fit into a contact lens; perhaps there may even be a nano-scale device that could be implanted on the retina. We could train our eyes to use the area of the retina that sees the screen. More realistically, Google Glass and many other projects have shown the possibility of screens mounted into glasses. Of course, the downside of needing to wear glasses has to be overlooked.
There have been lots of prototype foldable, roll-able and bendable, screens. Bendable screens could more easily fit the shape of the body. A folding screen could fold out to suit the size of screen required for the occasion.
Multiple screens positioned around the body could offer an alternative method to control the phone. The Pebble watch is a pioneer design in that idea domain. One could also imagine screens worn on a ring, a bracelet, a necklace.
Another idea would be that your phone could connect with any nearby screen, adopting a tablet or laptop as a temporary big screen.

Type input:
Typing stuff in has proven itself to be a very good survivor in the evolution of computing. Candidate ideas that could dispense with typing include; voice recognition, context based intelligence, gestures and a different sort of keyboard.
Voice recognition input of commands has recently been sent to the fore with the launch of SIRI. Some may say it was sent backwards with SIRI. Is voice recognition command input one of those science fiction wishes that turn out to be a real life disappointment (like video telephony)?
Another idea is that the phone can understand the context the user is in at any one moment and automatically deduces what you want to do and sets the command up for you without much or any human input.
Gestures could become a useful additional way to control your phone. To make a call we could simply make the familiar “phone call” hand symbol to initiate a new call.
There have been lots of design suggestions for better keyboard layouts than QWERTY. None have been adopted. The power of “it is this way because that is the way it was yesterday” has taken very strong hold over keyboards.

Batteries:
Bulky heavy batteries cause problems for embedding phones into clothes and for wearing phones bodily. Charging the battery is also difficult. Today you need to plug the phone in somewhere – off body.
Potentially there may be an opportunity to trickle charge a battery derived from the kinetic and or heat energy of the body. This charging could be supplemented with solar charging. If the battery was being constantly charged maybe it could be smaller and if that were so then perhaps it could be concealed in an item of clothing or implanted bodily in some soft tissue.

Storage:
To say any technological problem is solved is in part to suggest that further innovation is not required. That is not the case with storage; however, current storage technology such as a 32GB micro SD card is small enough and good enough to be sewn into a t-shirt or embedded in the body with a day surgery procedure. Unlimited storage can be accessed in the Cloud. Problem solved!

Processing:
We already have a powerful computer in a phone. Moore’s law suggests that computing power will increase exponentially. The processor part of the phone is already small, it seems certain that we can wear or embed a powerful processor on or in the body. Biotechnology based developments may even provide us in the future with computation ability built using the living fabric of the body. So it seems that phones and clothes are destined for each other but that is only a start point.

Systems and software:
This is where all the action is going to be. The idea of a worn personal computer – a computer for life – is unprecedented. Pop up context based alerts with relevant information served at the right moment and context ranging from short texts to rich media seems like a certain area for development. We are beginning to see the first clues as to how that concept may form in the way that people are using smart phones, notifications and apps today.

Graduate Developers

We are looking for clever computer science graduates (June 2013) and/or programmers with 1 or 2 years development experience to join our team in Derry/Londonderry.

About us
EyeSpyFX make enterprise strength mobile apps for security cameras.

Working on the leading edge of the Internet of Things, Machine to Machine technology & mobile services, we create mobile applications across a range of platforms including iOS, Android, Blackberry, Windows Phone & Java Mobile. These apps work with our own homegrown cloud services & server-side technology to give our users real-time mobile access to their security.

We work closely with world leading security camera manufacturers & security system integrators, as well as creating our own in-house applications & services. We have a number of exciting new projects that have us keen to recruit staff.

About you
Do you have a good foundation in Object Oriented Programming?
Are you ready to learn new skills & use new technologies?
Can you contribute new ideas & develop products with our team?
Then we’re looking for you.

Additionally (but not required), we’re looking for anyone with knowledge and/or experience in any of the following:
– Java,
– C#
– Objective C
– Server side technologies
– Cloud computing & services

If you’re interested in working with us,
please send an email with your CV to: info@eyespyfx.com

PVM for AXIS Cameras

EyeSpyFX are pleased to announce our latest AXIS compatible app!

PVM for AXIS Cameras is the world’s first Public View Monitor app solution that works using iOS devices. This contrasts with the conventional solution, which involves computers or expensive proprietary hardware.

Public View Monitor for AXIS Cams sends up to 16 wireless camera feeds to your TV or monitor system.

The app integrates with AXIS Camera Companion, providing a neat modern surveillance solution for small businesses.

Key features:

  • Using a Digital or Composite AV Adaptor cable, plug your iOS device straight into your TV or PVM Monitor.
  • Fast setup: Wirelessly scans and logs into AXIS cameras from your local network
  • Display options: Cycle through cameras at timed intervals or go splitscreen.
  • Sitelist: Track multiple sites for complete security e.g. the office, the shop, outside.
  • Compatible with all AXIS camera systems
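The “cycle through at timed intervals” display option is essentially a round-robin over the camera list. A minimal sketch of the resulting display order (camera ids invented; a real app would advance on a timer):

```python
import itertools

def cycle_feeds(camera_ids, ticks):
    """Return which camera is on screen for each dwell interval,
    cycling round-robin through the list."""
    order = itertools.cycle(camera_ids)
    return [next(order) for _ in range(ticks)]
```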

PVM for AXIS Cams is available on the App Store.

Get it now for iPhone, iPad, and iPod Touch.

We are hiring!

EyeSpyFX are a small team of specialist app developers making enterprise strength security cam apps. We are based in Derry/Londonderry in Northern Ireland. We are working on exciting new projects at the leading edge of IoT and M2M app technology and services.

We are looking for super clever computer science graduates and/or programmers with 1 or 2 years development experience.

We move quickly; you are flexible enough to weigh in wherever the need is.

You have meticulous attention to detail and expertise in Java, C# and/or Objective C. Knowledge and experience of LAMP server side technologies would be an added advantage.

Contact us: info@eyespyfx.com

3 types of connectivity set up for Network Cameras: Number 3: CLOUD

Cameras connect to a user account service hosted in the cloud. Mobile viewer apps connect to the same user account to view live cameras and recordings. This type of service is known by many names, for example: VSaaS, (Video surveillance as a Service) AVHS (AXIS Video Hosting Service), HVR (Hosted Video Recording), VSP (Video Surveillance Provider), MVS (Managed Video Service).

A major advantage of a hosted system is that there is no need to open a port on your LAN. This makes it ideal for locations where the IT manager will never agree to open a port – doctors’ surgeries, for example. It is also ideal for businesses with multiple locations, for example franchise businesses. Storage of the recorded video is offsite, and this feature also provides some key advantages.

3 types of connectivity set up for Network Cameras:

Number 1: Direct Connection

Number 2: Camera Management System Connection

Number 3: Cloud