
Moving to Dyson, and thoughts on academic research and industry


So, it’s been about 10 weeks since I joined Dyson (makers of the 360 Eye robot vacuum), having left Engineered Arts over the summer, and while the move itself was perhaps not the most opportune thing to happen at that particular point in time, I think that it has been a very good outcome in the grand scheme of things. I’m back in Research, which, after a stint in a pure development role, I now realise is where my heart is. I’m also in a job where I can apply more of my broad skill set. Generally, I’d say that I’m happier with the direction of my life and career.

It has been about a year since I decided to move out of academia into industry, and while it has been a bit of a rollercoaster ride, I’ve had lots of experiences and insights that have been good food for thought and reflection. Though my time at EA was short, and rather stressful at times, I learnt some very important things about the differences in the internal functioning of small (and now larger) companies and universities, as well as the differences between robotics development and robotics research.

Running a small company is clearly very difficult, and I take my hat off to anyone who has the guts and endurance to give it a sustained go. I don’t think that I have those guts (at least not now). Also, in a small company resources are stretched and managing them is difficult, and when you are one of those resources, it can be a very rocky journey indeed. That is something I never really experienced in academia, so it was quite a learning curve to get used to.

I think that one of the most important things I learnt at EA was to do with software development and management. I always suspected that software development in a PhD environment followed some “bad practices”, and when I look back at how I was managing software during my PhD (it was all via Dropbox, with no version control!) I was lucky nothing messed up too badly. What I saw at EA was far more extreme than anything I had seen previously, and it was a big eye opener. I also paid a lot of attention to the style of (Python) coding I saw at EA, as I was working with a couple of professional and very experienced software developers. Needless to say, I learnt a lot about software development in my time there.

Dyson is a completely different kettle of fish. For a start, it is a much larger company than EA, but still a small fish in a big ocean. It is also quite widely known to be a very secretive company, and as such, I can’t say much about it. However, that is part of what makes it such an exciting place to work, and also very different to the academic environment. This secrecy is a strange thing coming from academia, which traditionally is very open. It takes a little getting used to, but it is a very important aspect of the company (again, I can’t say much about the work).

Overall, I’m quite glad that I made the change to industry, as I’ve learnt a lot, and I think that I have become a better engineer and a better roboticist as a result, which is generally my goal. I’m also happy to be working with robots that are truly going into the “wild”, as I feel that I am closer to helping robots make a meaningful impact on the world – I can see the fruits of my labour in the hands of real people/users. That gives me a lot of job satisfaction.

I’ve always had an uneasy feeling that there is a disconnect between academic robotics research and the trajectory it is trying to depict/push – this “all singing and dancing” robot that is inevitably coming – and how we are actually going to get there given the current state of the (social) robotics industry and its current trajectory. I strongly believe that we need the population to get used to the idea of sharing the world around us (physical, and perhaps cyber, space) with autonomous robots ASAP, before we unveil these “all singing and dancing” robots.

From what I have seen, I think that this is vital in order to promote uptake of the smarter future robots that academia has in mind – if we are uneasy with robots around us, we will never accept these future robots (particularly as they will generally be larger). With that, I feel that there is a lack of academic HRI research addressing issues that will impact (and help) industry in the next 5 years or so. That is the kind of time frame that will help companies move toward building the robots that academia is aiming for. Make no mistake, companies like Aldebaran, Dyson, iRobot, Samsung, Honda, EA, etc., are at the cutting edge of shaping the uptake and wide-scale perception of robots in the present, and they are holding the steering wheel that will direct the trajectory of the kinds of robots we will see in the future (based upon how people react now, not in 10 years’ time).

I guess there is perhaps a little message in all of this – if you’re an academic and asked me for research advice, I’d encourage you to tackle practical issues and provide solutions that companies can pick up and run with in a fairly short time frame. The alternative is work that stays “hot and alive” in a research lab, but has far less utility outside the lab space. In essence, it could be collecting dust until industry is in a position to actually apply it (if anyone remembers, or finds, that the work was ever done).

I’m stopping here, as I’m not sure whether I’m drifting off topic from what I had in my head when I started writing this post. I do think that it captures some of my thoughts on academic research and how it applies to industry. I’ll probably mull it over a bit more, and dump my thoughts here at a later date as this is a topic I have been thinking about for a while. However, if you have an opinion on this, I’d love to hear it! Perhaps it’s a topic for the HRI conference panel?

Giving Nao some visual attention.


Ever since reading Cynthia Breazeal’s book, “Designing Sociable Robots”, I’ve had this constant itch to implement her visual attention model on a robot, mainly the Nao, as there are four of them lying around in the lab these days. So, suffice to say that I’ve finally gotten around to scratching this particular itch, and boy does it feel good! 🙂

So, if you haven’t already read this book (and if you work in social robotics, shame on you), I highly recommend it! It’s full of interesting insights and thoughts, and it is a must-read for any new MSc/PhD students embarking on their research journeys.

To get to the point, in one of the chapters Breazeal describes the vision system running on Kismet. This is actually something that was developed by Brian Scassellati (whilst working on “Cog”, if I recall), and I must say, I think it is a little gem (hence why I wanted to see it run on the Nao). The model is intended to make the robot attend to things it can see in the environment (e.g. things that move, people, objects, colours, etc.) using basic visual features. It is basically a bottom-up approach to visual processing: take lots of basic, simple features, and combine them “upwards” into something more complex.
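
To give a flavour of the bottom-up idea (and only a flavour – what follows is a simplified Python/OpenCV sketch of my own, not Scassellati’s implementation and not the C++ code linked below), the core is just a handful of cheap feature maps combined as a weighted sum into a single saliency map. The feature extractors and the weights here are placeholder choices:

```python
import cv2
import numpy as np

# Illustrative feature weights -- in Kismet these gains are modulated
# top-down by the behaviour system; here they are simply fixed.
WEIGHTS = {"colour": 0.4, "motion": 0.4, "face": 0.2}

# Haar cascade shipped with OpenCV, used as a stand-in face detector
FACE_DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def colour_map(frame_bgr):
    # Crude colour cue: saturated colours stand out
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    return hsv[:, :, 1].astype(np.float32) / 255.0

def motion_map(gray, prev_gray):
    # Crude motion cue: simple frame differencing
    return cv2.absdiff(gray, prev_gray).astype(np.float32) / 255.0

def face_map(gray):
    # Mark detected face regions as maximally salient
    sal = np.zeros(gray.shape, dtype=np.float32)
    for (x, y, w, h) in FACE_DETECTOR.detectMultiScale(gray, 1.3, 5):
        sal[y:y + h, x:x + w] = 1.0
    return sal

def saliency_map(frame_bgr, prev_gray):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    maps = {
        "colour": colour_map(frame_bgr),
        "motion": motion_map(gray, prev_gray),
        "face": face_map(gray),
    }
    # Weighted sum of the normalised feature maps -> single saliency map
    combined = sum(WEIGHTS[name] * fmap for name, fmap in maps.items())
    return combined / max(float(combined.max()), 1e-6), gray
```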

I’ve finally implemented the model from scratch and made it run either with a desktop webcam or with an Aldebaran Nao. This little personal project also has a more serious utility: I’m now beginning to build an online portfolio of my coding skills, as I have seen some employers request example code recently (and I’m currently on a job hunt). I’ve made two YouTube videos of the model. The first is of it running on my desktop machine in the lab, where I talk through the model and the parameters that drive it. In the second video I show the slightly adapted version running with a Nao. Here are those two videos:

Part #1

Part #2

I have to admit that there is certainly room for improvement and fine tuning in the parameter settings, as well as some nice extensions. For example, I had a bit of trouble as there is quite a lot of red in our office and the robot was immediately drawn to it. Either I need to change the method for attention point selection, or I need to take distance into account in some way (but there isn’t an RGB-D sensor on the Nao at the moment). Currently, for attention point selection I find all the pixels that share the same maximum value in the saliency map, and then take the centre of mass of the largest connected region of these. Alas, in the videos this was sometimes a background item…
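
For the curious, that selection step boils down to something like the following (again a Python/OpenCV sketch of the logic – my actual implementation is C++ with Qt):

```python
import cv2
import numpy as np

def attention_point(saliency):
    """Pick the attention point as described above: take all pixels at the
    saliency map's maximum value, keep the largest connected region of
    them, and return that region's centre of mass as (x, y)."""
    peak_mask = (saliency >= saliency.max()).astype(np.uint8)

    # Label the connected regions of peak pixels (label 0 is the background)
    n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(peak_mask)
    if n_labels <= 1:
        return None

    # Index of the largest peak region by pixel area
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))

    cx, cy = centroids[largest]  # centre of mass of that region
    return int(round(cx)), int(round(cy))
```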

Talking of possible extensions, I certainly see a lot of room for an adaptive mechanism that provides “top-down”, task-oriented control of the feature weights (at least), as was done with Kismet. There are quite a few parameters driving the model, and finding values that work can be a little tricky. Furthermore, I suspect that as soon as you change the setting, you will need to tweak the parameters again.
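
To make that concrete, the simplest version of such top-down control is just a table of task modes that remaps the feature weights before they are combined – something along these lines (the mode names and numbers here are made up purely for illustration):

```python
# Hypothetical task modes mapping to feature-map weights; Kismet's behaviour
# system modulated gains like these based on its motivational state.
MODES = {
    "seek_people": {"face": 0.7, "motion": 0.2, "colour": 0.1},
    "seek_toys":   {"face": 0.1, "motion": 0.3, "colour": 0.6},
    "explore":     {"face": 0.34, "motion": 0.33, "colour": 0.33},
}

def weights_for(mode):
    # Fall back to a neutral weighting if the requested mode is unknown
    return MODES.get(mode, MODES["explore"])
```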

Coding this system up also made me think about the blog post I wrote a while back about what a robot should do out of the box. I recall that the Nao was doing at least face detection and tracking. I pondered whether this kind of model would work as an out-of-the-box program. Rather than having fixed weights, the robot could have some pre-set modes (as Kismet did) and just cycle through these at different intervals. Perhaps the biggest problem would be the onboard processing that would need to happen. My program is multi-threaded (each feature map is computed in its own thread, as is the Nao motor control) and isn’t exactly computationally cheap, so I can see it using quite a bit of the robot’s processing resources.
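
For what it’s worth, the threading structure is essentially “one worker per feature map, then join and combine”. A rough Python equivalent is sketched below (my own code uses C++/Qt threads; whether something like this is cheap enough for the Nao’s onboard CPU is exactly the open question):

```python
from concurrent.futures import ThreadPoolExecutor

def compute_feature_maps(frame, feature_fns):
    """Run each feature-map function over the frame in its own worker thread
    and return {name: feature_map}. feature_fns is a dict of callables,
    e.g. {"colour": colour_map, "face": face_map, ...}."""
    with ThreadPoolExecutor(max_workers=len(feature_fns)) as pool:
        futures = {name: pool.submit(fn, frame) for name, fn in feature_fns.items()}
        return {name: fut.result() for name, fut in futures.items()}
```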

Anyway, there are lots of possibilities with this model, whether tweaking it, extending it, or merging it with other “modules” that do other things. As such, I’ve made the code available to download:

Desktop + webcam version (needs Qt SDK, OpenCV libs and ArUco libs): Link

Version for the Nao (needs Qt SDK, OpenCV libs, ArUco libs and NaoQi C++ SDK, v 1.14.5 in my case): Link

Note: the NaoQi SDK isn’t free. You need to be a registered developer; I have access through the Research Projects at Plymouth University. I can’t provide you with the SDK, as this would go against the agreement we have with Aldebaran… Sorry… 😦

What should robots do “out of the box” in the future?


So today, after some waiting, we got our Nao Evolution robot. As you might expect, it took very little time for the scissors to come out and open the box, revealing the shiny new Nao robot, which looks surprisingly like our V4 Nao (it’s even got the same fiery orange body “armour”). I took a little time to glance around looking for the new visible enhancements to the design, which seem to amount only to the new layout of the directional microphones in the head. It would seem that the rest of the improvements lie underneath the plastic shell. So, time to hit the power button and fire up the robot…

This is where I paid far more attention. I wanted to see what software/programs/apps Aldebaran have added to the “fresh out of the box” experience. I think this is actually really important, because when you’re opening your new £5000 robot (and it doesn’t have to be a robot), you really don’t expect the excitement and wow factor to die as soon as you realise the thing doesn’t actually do anything when you turn it on. That’s a real anti-climax! Booooo!

I have to say that when we turned on the Nao Evolution today, I was rather pleasantly surprised. Nao’s Life was running by default, and it seemed that the robot was doing both face tracking and sound localisation out of the box. Basically, the robot looked at you and followed you with its gaze, as well as responding to sounds. However, we didn’t get anything verbal, and no robotic sounds (unlike Pepper’s awakening). That said, it is basic social behaviour from the robot, and it already had our roboticists enthused. Clearly Aldebaran have gotten something right! However, there was still computer-based setup to do (giving the robot a name, a username, password, wifi/internet connection, etc.). In the future, it would be nice to see some of that migrate to the social interface that the robot affords.

All of this did get me thinking though. Nao has an app store, which is a bit sparse at the moment, but I predict will become more and more populated given that Aldebaran have also introduced their Atelier program. Furthermore, it reminded me of a conversation that I had at HRI’14 with Angelica Lim (who is now at Aldebaran) where we were musing about how you might get the robot to interface with the NaoStore autonomously, and suggest apps for users to try. An interesting line of thought in my view.

Today I found myself pondering this a little further. The NaoStore and app arrangement for the Nao seems very much like the Apple App Store and Google Play services. However, I wondered about what form the apps would take. Would they be very stand-alone pieces of software, or would they need a certain degree of inherent integration with the other vital pieces of software on the robot (for example, user models)? Remember, we have a social robot here, which in the future will likely have a personal social bond with you (and you with it). What might be the implications for how we design apps for social robots?

Should robot apps really take the form of individual pieces of software that act and behave very differently, and thus might change the personality/character of the robot? Should we even be able to start/stop/update apps, or should app management be something that we as users are oblivious to? The latter seems to be the setup with AskNAO at the moment, as teachers/carers have to set up a personal robot routine for each child, but it is unlikely that the child knows this is happening in the background. To them, I suspect, it is all the same robot making the decisions. The magic spell remains intact (but child-robot interaction is nice that way)…

What happens with grown-ups though? Somehow I can see that in a perfect future, the robot would have a base “personality” or “character” of sorts that makes it unique from other robots (at least in the eyes of its users), and that it alone manages the apps that it then runs. You as the user could still explicitly ask for apps to be installed and query the NaoStore, but I can imagine that this would be secondary to the robot being able to recognise that downloading a certain app might be useful, without explicitly being told to do so (though I recognise that app management will be critical in this case; we don’t need dormant apps taking up space). Perhaps something comes up in conversation with your robot, and it decides it would be worth getting an appropriate app (for example, you like telling and hearing jokes, so Nao downloads a jokes app so that it can spontaneously tell you jokes in the future). This is probably a long way off, and certainly needs some very clever AI and cognition on the robot’s part, not to mention many, many creases ironed out. Thus, I suspect that for the time being we will be using technology such as laptops and tablets/phones as the in-between media through which we manage our robots. Sadly, this sounds like our robots will be more like our phones and computers, rather than different entities altogether.

To sum this all up, I guess that I am generally hypothesising that people’s perception of and attitudes towards robots that have an app store behind them might differ depending upon how apps are managed (managed by users themselves, or by the robot autonomously and unbeknownst to the user) and whether people even know of the existence of the app store… Could be some interesting experiments in there somewhere…