Daisy is a generic pattern recognition system. Generic means that it isn't limited to a single application domain (for example, pictures of London buses). If it is set up appropriately, it can recognise any pattern you care to throw at it.
Daisy identifies patterns – any patterns. Although it was envisaged as a system to rapidly identify insects and other invertebrates (to aid biodiversity and ecology studies), Daisy has proved capable of a lot more than this, and has been applied well beyond that original domain.
Daisy is a self-organising neural net. This means that it learns in a very similar way to the human brain. You give it some examples (or a training set) and then Daisy uses these examples to generate a neural net to make further identifications.
A training set is a set of examples of a class which Daisy is going to be used to identify. For example, if you wanted Daisy to identify butterflies from digital images of butterflies, you would need to give it about 20 example images for each sort of butterfly. Daisy would use these images to generate a classifier. It would also test the classifier using the training set to determine how accurate it is. This means that Daisy can attach a probability of being correct to every classification it performs.
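The train-then-self-test workflow described above can be sketched in a few lines. To be clear, this is not Daisy's algorithm – Daisy uses a self-organising neural net – and the class names and feature vectors below are invented for illustration; a simple nearest-centroid classifier merely shows the idea of building a classifier from labelled examples and then measuring its accuracy against the training set.

```python
# Sketch of the train/self-test workflow (NOT Daisy's actual algorithm).
# Feature vectors stand in for processed images of butterflies.

def train(training_set):
    """training_set: dict mapping class name -> list of feature vectors.
    Returns one centroid (mean vector) per class."""
    centroids = {}
    for label, examples in training_set.items():
        n = len(examples)
        centroids[label] = [sum(v[i] for v in examples) / n
                            for i in range(len(examples[0]))]
    return centroids

def classify(centroids, vector):
    """Return the class whose centroid is nearest to `vector`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], vector))

def self_test_accuracy(centroids, training_set):
    """Fraction of training examples classified correctly -- the kind of
    figure Daisy attaches to each classification it performs."""
    total = correct = 0
    for label, examples in training_set.items():
        for v in examples:
            total += 1
            correct += (classify(centroids, v) == label)
    return correct / total

# Toy data: two hypothetical butterfly classes, two features each.
data = {"red_admiral":   [[1.0, 0.1], [0.9, 0.2], [1.1, 0.0]],
        "cabbage_white": [[0.1, 1.0], [0.2, 0.9], [0.0, 1.1]]}
model = train(data)
print(classify(model, [0.95, 0.1]))    # -> red_admiral
print(self_test_accuracy(model, data)) # -> 1.0
```

In practice each class would need around 20 example images, as the answer above notes, rather than the three toy vectors used here.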
Daisy is very accurate. Typically, if Daisy says that it thinks x is an example of class y, it is right on average 98% of the time, irrespective of the sort of pattern being classified. Daisy is a lot better at identifying things than a human observer. We have tested Daisy against human observers and found that in the best-case (for Daisy; worst-case for the humans) scenarios, Daisy gives 98% accurate classifications against an average of 32% for humans.
Yes, but not very much. You will need either to ‘crop’ the image so the object to be identified fills the field of view, or to mark out the object you want to identify using a mouse, stylus or finger (on touch-sensitive screens). All this processing can easily be performed via the Daisy web interface.
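The ‘crop’ step above amounts to cutting an image down to the bounding box the user marked out. As a minimal sketch, with the image represented as a plain 2D list of pixel values (a real front end, such as Daisy's web interface, would of course operate on actual image data):

```python
# Crop an image, stored as a 2D list of pixel values, to a bounding box
# so that the object of interest fills the field of view.

def crop(image, top, left, bottom, right):
    """Return the sub-image covering rows top..bottom-1, cols left..right-1."""
    return [row[left:right] for row in image[top:bottom]]

image = [[0, 0, 0, 0],
         [0, 5, 6, 0],
         [0, 7, 8, 0],
         [0, 0, 0, 0]]
print(crop(image, 1, 1, 3, 3))  # -> [[5, 6], [7, 8]]
```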
Yes, two of them in fact. If you are a power user, there is a comprehensive GTK+-based graphical user interface called DFE. You can use this to do classification, build training sets, test how accurate training sets are, and perform a host of image processing operations which make image data easier for Daisy to process. If you simply want to make an identification, there is a simplified web-based interface, iDaisy, which can be accessed from web browsers like Google Chrome and Safari.
Yes, it will, but you will need a special front end for some sorts of pattern. For example, if you wanted to use Daisy to analyse speech (and other sounds), you would need a front end based on a technique like ADPCM, transforming continuous sounds into strings of phonemes which Daisy can analyse.
Yes, there is a simple web-based front end which can be used to make identifications.
It really depends on how many classes you need to identify and what you need to use the system for. If you need to identify things which can only belong to 20 or 30 classes, you could run the entire Daisy system on an iPhone or iPad. If you have a dynamic set of thousands of classes (one which learns, and may have further classes added to it or discovered by it), you will need a rack of blade servers.
Daisy doesn't run directly under Windows. However, on modern hardware (Intel Core 2 or better, or AMD K8 or better) it can be run very efficiently within a guest Linux or BSD UNIX operating system using Oracle VirtualBox. Daisy will run natively on Mac OS X, as this is a variant of BSD UNIX. However, the web interface is designed to run through any web browser, including Microsoft products.
Yes, the web-based interface is accessible via 3G mobile phones (e.g., smartphones such as the Apple iPhone). It can also be used from pad-based devices (such as the Apple iPad and iPad 2).
Yes, Daisy uses a self-organising neural net: if you give it more examples, its performance will improve. Daisy can also learn all by itself. It is capable of integrating examples which are almost certain to be correct into its training sets, potentially increasing its performance.
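The self-teaching loop just described – fold in only those classifications that come back with a near-certain probability of being correct – can be sketched as follows. The `classify_with_confidence` function and the threshold value are hypothetical stand-ins, not part of Daisy itself:

```python
# Sketch of self-teaching: absorb near-certain classifications into the
# training set; everything else is left for a human to label.

CONFIDENCE_THRESHOLD = 0.98  # illustrative; only near-certain results absorbed

def self_teach(training_set, unlabelled, classify_with_confidence):
    """Add near-certain examples to the training set; return the rest."""
    still_unknown = []
    for example in unlabelled:
        label, confidence = classify_with_confidence(example)
        if confidence >= CONFIDENCE_THRESHOLD:
            training_set.setdefault(label, []).append(example)
        else:
            still_unknown.append(example)
    return still_unknown

# Toy classifier: values above 0.5 are "class_a" with high confidence.
def toy_classifier(x):
    return ("class_a", 0.99) if x > 0.5 else ("class_b", 0.60)

ts = {"class_a": [0.9]}
leftover = self_teach(ts, [0.8, 0.3], toy_classifier)
print(ts)        # -> {'class_a': [0.9, 0.8]}
print(leftover)  # -> [0.3]
```

The design point is the threshold: set it too low and misclassified examples pollute the training set, so only identifications that are "almost certain to be correct" are integrated.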
Anybody who needs to classify things quickly and accurately. Historically, Daisy has been used by taxonomists (people in museums who classify animals) and by ecologists (people who study the behaviour of animals and want to know what those animals are). However, Daisy technology would also be of use in a wide range of other areas; agriculture, security, retail, and medicine are obvious examples. Less obvious is petroleum prospecting (where Daisy has been used to detect oil-bearing strata and help guide test drilling).
Yes, to date it has classified insects, fish, dinosaur bones, human faces, the sounds made by human infants and more. It has also been used to identify the make and model of mobile phones (for recycling) based on their appearance, and to identify grocery items using manufacturer and brand logos.
In general, the bigger the training set, the better the classification (but the longer it will take). Almost all the practical classifiers we have built have performed well (> 95% of material identified correctly) with 20-40 training set examples.
As many as you like, but the more classes you have, the longer a classification will take. We have found that a mid-range desktop PC takes about 2-3 seconds to deliver a classification if there are about 100 classes in Daisy's database. If there are fewer than 50 classes, identifications are for all intents and purposes instantaneous.
Yes, it depends on how and what you want the system to do. Daisy can automatically direct the user towards a number of back ends which will provide the user with information about the object Daisy has identified. Examples include:
If you want, Daisy can be asked to use a web search engine (Google, Bing, Yahoo!, etc.) to generate a list of possible information sources, which it passes back to the user.
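As a minimal sketch of such a search-engine back end, the following builds query URLs for an identified object using only the Python standard library. The engine list and the `search_links` helper are illustrative, not part of Daisy; the base URLs are the engines' public query endpoints:

```python
# Build search-engine query URLs for an object Daisy has identified,
# so the user can be passed a list of possible information sources.

from urllib.parse import urlencode

ENGINES = {
    "Google": "https://www.google.com/search?",
    "Bing":   "https://www.bing.com/search?",
}

def search_links(identification):
    """Return a list of (engine, url) pairs for the identified object."""
    query = urlencode({"q": identification})
    return [(name, base + query) for name, base in ENGINES.items()]

for engine, url in search_links("red admiral butterfly"):
    print(engine, url)
# e.g. Google https://www.google.com/search?q=red+admiral+butterfly
```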