Tag Archives: Artificial Intelligence

The glass was cracked, not broken


Google Glass is back

Customer value

Advances in wearable computing are affecting both the consumer and the business space. Where wearable computing used to be science-fiction territory, devices are now reaching the mass market, with Google Glass as the most high-profile example: a pair of glasses augmented with a small display and a tiny computer with wireless networking and GPS functionality. At its core, it is just a tiny mobile computer with novel display technology and user interfaces. This might seem unimpressive, but what is impressive is that the Glass puts the display directly in the user’s field of view and creates a user interface based on voice, gestures and taps on the glasses’ frame. (Gray, 2013)


The challenge with these wearable gadgets is to find a value proposition. Smart glasses need to add to the reasons people put glasses on their face. When the Glass was released, Google hoped that early adopters would flesh out the value proposition, but the biggest challenge turned out to be the form factor of the Glass: many people do not enjoy wearing glasses. Given this behavioural observation, the value proposition for keeping the Glass on your face had to be a good one. (Bajarin, 2013)

Business model

The business model is an ecosystem platform, and like all platforms it relies on an army of developers trying to create new value-adding apps. (Dashevsky & Hachman, 2014) Partners that built apps for the Glass ecosystem included Twitter, Facebook, CNN and Elle (Gaudin, 2013). In fact, Google did not really know what to do with the Glass, which is why it built a developer program first, attempting to tap the wisdom of the crowd. (Shaughnessy, 2013)

Let’s have a look at the components. It all started with a product idea. The next step was validation: through a crowdsourced competition, Google tried to find out what the Glass could be used for. The third step was rapid evaluation of the ideas. Next, the ecosystem was formed and developers were selected to join it. The fifth step was financing and acquiring funds. The last component was proposing a tentative launch date for the Glass and improving, or iterating, the design with customer feedback.

Reflecting on this business model, it is obvious that Google’s own investments were relatively low, even after the invention phase was over. The developers were the ones bearing the costs. Therefore the main risk for Google was not a financial risk, but a reputational one: the risk of not getting the product right and having to close the project. (Shaughnessy, 2013)

Institutional environment

 Shortly after its launch, people began to fret about the social implications. Two questions dominated the debate: (1) Is the video component of the Glass a threat to our privacy? (2) Will people be able to concentrate on what is in front of them when they get distracted by the internet all the time?

Privacy

The problem is that people cannot consent to filming or being filmed by the Glass. With the Glass, Google is able to compute what a user is seeing and the idea that you can become part of someone else’s data collection was quite alarming to many. (Arthur, 2013)

“With a phone, the person I am taking a picture of will notice me; with the Glass nobody knows whether or not they are being watched, no matter what they are doing.” (Arthur, 2013; Klepic, 2014)

The Information Commissioner’s Office (ICO) warned about the use of wearables and the resulting risk of breaches of the Data Protection Act. The Glass’ wide scope for data collection gave it more potential for breaking UK law than any other device. (Fox-Brewster, 2014) Should movie theatres, concert venues and casinos try to ban the Glass? And how are corporations going to stop employees from photographing confidential trade documents? (Klepic, 2014) Banning or restricting the Glass was also a major issue for restaurants, hospitals, sports grounds and banks. (Gray, R., 2013)

Distraction

The second debate revolved around the question: will people be able to concentrate on what is in front of them when they are distracted by the internet all the time? This was largely a legal question about the safety of using the Glass in traffic. The Glass is supposed to stop people from looking at their phones, but people are fundamentally incapable of looking away from what they are doing for a few seconds without losing their concentration. If texting and calling while driving is illegal, how could constantly incoming notifications that are only an eye movement away be legal? (Klepic, 2014)

Why the glass broke

In January 2015 Google stopped selling the Glass, which had been made available as an early prototype to fans and journalists in 2013. As described in the section “Business model”, Google wanted to release the Glass to the public so customers could provide feedback that Google X could use to improve the design. (Colt, 2015) However, Glass Explorers treated it like a finished product, even though everyone at Google X knew that the Glass was still a prototype with major functionality issues to be solved. (Bilton, 2015)

The section “Customer value” already described how difficult it would be to create customer value. Google advertised the Glass in terms of experience augmentation, while in reality no one was comfortable with wearing a camera on their face in the middle of normal social interaction. (Weidner, n.d.) The Glass failed to be “cool”. Google desperately tried to make the Glass seem cool by putting it on models during Fashion Week, in fashion advertorials and in the hands of fashion influencers, eventually reinforcing that the Glass was not cool. This is a typical case of a post-modern marketing failure. (Haque, 2015)

The best explanation for why the Glass failed is that it entered the wrong market. The Glass could be a transformational tool for professionals, like truck drivers, train conductors, machine operators, police or airplane pilots. The problem is that Google did not target these professional and B2B audiences. Instead, they targeted journalists and celebrities. (Monetizing Innovation, 2016)

Raise a glass: the Glass is back

Alphabet reintroduced the Glass to the world. It officially ended its initial ambition to make the Glass a consumer device, both because of privacy concerns and because the Glass simply looked unfashionable. Finally, the potential for use in business, as a tool for training, has been acknowledged. (Tsukayama, 2017) The Glass is now advertised as an enterprise-focused device aimed at the healthcare, manufacturing and energy industries. Although the first consumer preview was unsuccessful, it did reveal the potential of using the Glass in these specific institutional contexts. (Hern, 2015)

References

Arthur, C. (2013, March 6). Google Glass: is it a threat to our privacy? The Guardian: https://www.theguardian.com/technology/2013/mar/06/google-glass-threat-to-our-privacy

Bajarin, B. (2013, September 16). Wearable Gadgets: In Search of a Value Proposition. Time: http://techland.time.com/2013/09/16/wearable-gadgets-in-search-of-a-value-proposition/

Bilton, N. (2015, February 4). Why Google Glass Broke. New York Times: https://www.nytimes.com/2015/02/05/style/why-google-glass-broke.html

Colt, S. (2015, February 4). Google knew Glass ‘wasn’t even close to ready,’ but Sergey Brin pushed it out. Business Insider: http://www.businessinsider.com/why-google-glass-failed-2015-2?international=true&r=US&IR=T

Dashevsky, E., & Hachman, M. (2014, April 15). 16 Cool Things You Can Do With Google Glass. PCMag: https://www.pcmag.com/feature/308711/16-cool-things-you-can-do-with-google-glass

Fox-Brewster, T. (2014, June 30). The Many Ways Google Glass Users Risk Breaking British Privacy Laws. Forbes: https://www.forbes.com/sites/thomasbrewster/2014/06/30/the-many-ways-google-glass-users-risk-breaking-british-privacy-laws/#3068e6e147d8

Gaudin, S. (2013, May 16). Google Glass ecosystem grows with Twitter, Facebook and CNN apps. Computerworld: https://www.computerworld.com/article/2497625/emerging-technology/google-glass-ecosystem-grows-with-twitter–facebook-and-cnn-apps.html

Gray, P. (2013, May 14). The business value of Google Glass and wearable computing. Techrepublic: https://www.techrepublic.com/blog/tech-decision-maker/the-business-value-of-google-glass-and-wearable-computing/

Gray, R. (2013, December 4). The places where Google Glass is banned. The Telegraph: https://www.telegraph.co.uk/technology/google/10494231/The-places-where-Google-Glass-is-banned.html

Haque, U. (2015, January 30). Google Glass Failed Because It Just Wasn’t Cool. Harvard Business Review: https://hbr.org/2015/01/google-glass-failed-because-it-just-wasnt-cool

Hern, A. (2015, July 31). Google Glass is back! But now it’s for businesses? The Guardian: https://www.theguardian.com/technology/2015/jul/31/google-glass-wearable-computer-businesses

Klepic, J. (2014, January 23). People Aren’t Seeing the Legal Problems Ahead With Google Glass. Huffington Post: https://www.huffingtonpost.com/jure-klepic/people-arent-seeing-the-legal_b_4113417.html

Monetizing Innovation. (2016, April 28). The reason Google Glass failed that no one is talking about. Monetizing innovation: http://monetizinginnovation.com/2016/04/the-reason-google-glass-failed/

Shaughnessy, H. (2013, May 3). Google’s Innovative New Business Model For Google Glass. Forbes: https://www.forbes.com/sites/haydnshaughnessy/2013/05/03/the-radical-new-business-model-behind-google-glass/#7715cd6a3d8a

Tsukayama, H. (2017, July 18). Remember Google Glass? It’s back and ready for work. The Washington Post: https://www.washingtonpost.com/news/the-switch/wp/2017/07/18/remember-google-glass-its-back-and-ready-for-work/?utm_term=.2f69bcd0090f

Weidner, J. (n.d.). How & Why Google Glass Failed. Investopedia: https://www.investopedia.com/articles/investing/052115/how-why-google-glass-failed.asp

Stronger together! How co-creation unveiled image recognition applications.


A short story of image recognition applications for long-established businesses

What does image recognition bring to mind for you? Tesla’s Autopilot? Google’s automated image organization or Facebook’s face recognition system?

All of these are state-of-the-art image recognition applications, yet they might not be the most profitable ones. Traditional businesses are often considered laggards when it comes to technological innovation, but they actually hold the highest added-value applications for computer vision. From automatic quality control to predictive maintenance, long-established companies run on many simple but repetitive tasks that can easily be automated with computer vision. So why don’t we hear about them?


Long-established companies face many challenges in adapting their operations to computer vision technology. Often handicapped by their unglamorous corporate image, they struggle to attract talented data scientists and fall behind in developing AI applications. For this reason, many solution providers started to offer a variety of off-the-shelf image recognition APIs. But once again, this approach was not satisfying: most APIs had too restricted a scope and performed poorly once used in a real business environment.

In response to the lack of success of these APIs, more and more image recognition providers are pivoting towards custom image recognition applications, and this might finally be the right approach to bring AI into traditional companies’ operations. To tailor each system to business needs, a strong collaboration between solution providers and clients turns out to be required. It is therefore relevant to present this new approach through the lens of value co-creation.

Co-creation principles of real-world image recognition applications

1. Custom, the system will be

As mentioned above, custom applications proved to be far more effective at solving businesses’ problems. Image recognition applications are systems that take an image as input and return information about it as output. This information can be a tag (e.g. “there is a dog in this image”) or an object localization, for instance. They are highly specific to each company and therefore need to be adapted every time.
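To make this input/output contract concrete, here is a minimal sketch in Python of what such a system exposes. The names (`Prediction`, `recognize`) are hypothetical and not taken from any specific vendor’s API; the point is simply that an image goes in and tags, with optional localizations, come out.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Prediction:
    """One piece of information returned about an image."""
    tag: str                                             # e.g. "dog", "scratch", "missing_screw"
    confidence: float                                    # model confidence between 0 and 1
    box: Optional[Tuple[int, int, int, int]] = None      # (x, y, width, height) for localization tasks


def recognize(image_bytes: bytes) -> List[Prediction]:
    """Hypothetical entry point of a custom image recognition system:
    an image goes in, a list of tags (and optional boxes) comes out."""
    raise NotImplementedError("placeholder for the trained, customer-specific model")
```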

2. Client’s images, you will use

To ensure satisfactory performance, each application should be built with the customer’s images. By that, I mean that the application will later predict information from specific images in production, and the model used in production should be created with extremely similar images. I won’t go into details, but keep in mind that AI learns from examples, and the more relevant the examples are, the more accurate the results will be. Be careful: some images qualify as personal data and have to comply with personal data regulations.
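As a rough illustration of the “AI learns from examples” point, the sketch below (my own simplification, with a hypothetical `(image_path, tag)` data format and `model.predict` interface) shows why the client’s own images matter: the model is both trained and evaluated on data drawn from the same production-like source.

```python
import random
from typing import List, Tuple


def split_client_images(samples: List[Tuple[str, str]], holdout_ratio: float = 0.2,
                        seed: int = 42) -> Tuple[list, list]:
    """Split the client's annotated images, given as (image_path, tag) pairs,
    into a training set and a held-out evaluation set from the same source."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_ratio))
    return shuffled[:cut], shuffled[cut:]


def accuracy(model, eval_set: List[Tuple[str, str]]) -> float:
    """Fraction of held-out client images the model tags correctly --
    a proxy for how it will behave on the client's production images."""
    correct = sum(1 for image, tag in eval_set if model.predict(image) == tag)
    return correct / len(eval_set)
```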

3. Involved, your client has to be

Unlike some other IT applications, defining requirement specifications won’t be enough to build a custom application. Customers should be involved during the whole process to ensure that the final application correctly matches their operations. For instance, if a company wishes to automate quality control, it will need to define which tags best represent the different types of defect on spare parts.

4. Labelling, your client will be in charge of

Finally, in cases where the customer is the expert, the only way to create a custom system is to put the client to work. As briefly mentioned before, to build an image recognition model you need to show it as many examples as possible. To do so, you need to annotate every image with its corresponding tags, and some tags require an expertise possessed only by the operators. For instance, there is a lot of excitement around automatic detection of cancerous cells in medical images. To create an auto-diagnosis system, doctors need to teach the algorithm to differentiate healthy cells from cancerous ones, and this requires a specific annotation expertise that cannot be outsourced.
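To show what “putting the client to work” can look like in practice, here is a minimal, hypothetical annotation record; the field names are illustrative, not a prescribed schema. The expert’s tags become the training examples the model learns from.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Annotation:
    """One expert label attached to an image by the client's domain expert."""
    image_id: str
    tags: List[str] = field(default_factory=list)  # tag set agreed with the client, e.g. ["cell", "suspicious"]
    annotator: str = ""                            # the operator or doctor who labelled the image
    notes: str = ""                                # free-text remarks that may later refine the tag set


def build_training_examples(annotations: List[Annotation]) -> List[Tuple[str, List[str]]]:
    """Turn expert annotations into (image_id, tags) pairs the model can learn from."""
    return [(a.image_id, a.tags) for a in annotations]
```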


Information asymmetry has inhibited the development of computer vision applications: traditional companies have struggled to understand how the technology could benefit their business, and AI companies have struggled to uncover potential use cases for them. Establishing co-creation relationships to build image recognition applications might finally allow a faster integration of AI into traditional businesses.

Deepomatic, making vision AI accessible to every business


Let’s illustrate how these principles can be applied to a business model. The French start-up deepomatic develops a software platform that enables businesses to build custom image recognition systems. With offers ranging from simple licence plans to project-based sales, the start-up supports clients from use-case ideation to application deployment. The relationship with its clients is structured around step-by-step meetings to define the scope and tags, collect images, and so on. The platform they designed helps manage datasets and performance, but also bridges deepomatic’s actions with its clients’. Because a system’s performance can be improved over time, deepomatic designed the software as a human-in-the-loop platform: once in production, the system can still return images about which it is unconfident, and the client’s experts can annotate them again and deploy a new version. This way, the system can evolve over time to match operational changes, a strong example of a dynamic and customized product.
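The human-in-the-loop pattern described above can be sketched as a simple routing rule plus a periodic retraining step. This is a generic illustration under an assumed confidence threshold and a hypothetical model interface, not deepomatic’s actual implementation.

```python
CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off; a real platform would make this configurable


def route_prediction(image, model, review_queue):
    """Human-in-the-loop routing: confident predictions flow straight into the
    client's operations, unconfident ones are queued for expert re-annotation."""
    prediction = model.predict(image)              # hypothetical model interface
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        return prediction                          # used directly in production
    review_queue.append(image)                     # a client expert will label it later
    return None


def retrain_from_reviews(model, reviewed_examples):
    """Fold the newly annotated images back into training and deploy a new
    version, so the system keeps tracking operational changes."""
    model.fine_tune(reviewed_examples)             # hypothetical training call
    return model
```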

For more information about deepomatic’s platform, see the company’s website listed in the references.

References

deepomatic’s website: https://www.deepomatic.com/

Saarijärvi et al. (2013), “Value co-creation: theoretical approaches and practical implications”, European Business Review

Kohtamäki, Rajala (2016), “Theory and practice of value co-creation in B2B systems”, Industrial Marketing Management

How Deutsche Bank Crowdstorms the Future of Banking with Jovoto


Artificially Enhanced Banking

Deutsche Bank is Germany’s largest bank and has a significant presence on every continent. It provides wide-ranging financial services and, like all financial companies, increasingly uses online technology to deliver them. Deutsche Bank believed it could use Artificial Intelligence (AI) to improve its business, but did not know how, and was spending a lot of money researching this. Deutsche Bank therefore chose to collaborate with Jovoto, a company providing innovation platforms, to establish a co-creation project in which the public provided it with ideas about AI. (Deutsche Bank, 2017)

Jovoto helps organizations innovate. It sets up and manages online spaces that gather ideas around questions posed by organizations. In doing so, Jovoto allows brands and NGOs to carry out a ‘co-creation process’: to brainstorm at scale and to work out design and innovation challenges with more than 80,000 creative professionals. Jovoto calls this ‘crowdstorming’, essentially a form of co-creation in which the public and a company collaborate to generate ideas. (Jovoto, 2017)


How did Deutsche Bank carry out co-creation through Jovoto?

Deutsche Bank got Jovoto to create an innovation competition, challenging the public to submit ideas and offering rewards for good ones (key resource and process). Jovoto posted the challenge ‘share your vision of how Artificial Intelligence can help Deutsche Bank reinvent its customer service experience’ and promised that the best ideas would win awards and cash prizes.

 

 

Jovoto managed this competition and vetted who could contribute ideas, making sure the input came only from professionals. Jovoto determined the institutional environment of the platform, making clear that all users keep the copyright to their ideas.

This competition offered ‘joint profitability’, providing gains for both the company and the public. For Deutsche Bank, it was a way to get new ideas about AI without having to invest in Research & Development (R&D). For professionals, the competition offered an opportunity to collaborate with a multinational (which boosts their reputation) and possibly to win money (customer value proposition). In reality Deutsche Bank benefited more, and so it had to make the competition prize attractive enough to motivate people to participate.

From the information available, it is not possible to tell how many ideas in total were submitted. Deutsche Bank said they had acquired 25 good ideas, but did not publish what these ideas were. As such, it is difficult to evaluate how successful this project was.

Analysis

In general, the project seems to have worked well for Deutsche Bank. The bank clearly considers it a success, because it has since run two more competitions with Jovoto. It should be noted that even though these projects may save on R&D costs, Deutsche Bank still had to spend money to carry them out.

Based on Deutsche Bank’s case, other companies should also consider using Jovoto to set up co-creation schemes. It allows companies to generate new insights from a wide range of experts around the world. The submitted ideas are in general of high quality (the feasibility requirement is met) and are therefore of great value to the company. Using conventional R&D methods, which are more expensive, companies are unable to collect as many ideas or ideas of such high quality.

 

Sources

Jovoto.com (2017). Available at: https://www.jovoto.com/creatives/. Accessed on 09/03/2017

Deutschebank.nl (2017). Available at: https://www.deutschebank.nl/nl/content/over_ons_campagnes_cfo_event_de_cfo_en_innovatie.html. Accessed on 09/03/2017

 

Research Framework, Strategies, And Applications Of Intelligent Agent Technologies (IATs) In Marketing


What is an agent?

Anything that perceives its environment through sensors and, in return, acts upon it (Russell and Norvig, 1995).

What is an intelligent agent? An agent that displays machine learning abilities.

Do Amazon’s Alexa, Apple’s Siri, Google Assistant or Microsoft’s Cortana ring a bell?

In “Research framework, strategies, and applications of intelligent agent technologies (IATs) in marketing”, the authors attempt to define how these intelligent agent technologies are used in the context of marketing and how marketers can understand and exploit them. The first step in that direction was to establish a marketing-centric definition. According to the authors, Intelligent Agent Technologies are:

Systems that operate in a complex dynamic environment and continuously perform marketing functions such as:

  • dynamically and continuously gathering any data that could influence marketing decisions
  • analyzing and learning from data to provide solutions/suggestions
  • implementing customer-focused strategies that create value (for customers and firms)
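Read literally, this definition is the classic sense-analyze-act loop of agent theory applied to marketing data. The sketch below is only a schematic rendering of that loop; the sensor, analysis and action callables are placeholders, not an implementation from the paper.

```python
import time


def marketing_agent_loop(sensors, analyze, act, interval_seconds=3600):
    """Schematic IAT loop: continuously gather marketing-relevant data,
    learn from it, and implement customer-focused actions.
    `sensors` is a list of zero-argument callables (web analytics, CRM events, ...),
    `analyze` updates the agent's knowledge, and `act` applies it."""
    knowledge = {}
    while True:
        observations = [sensor() for sensor in sensors]   # gather data that could influence decisions
        knowledge = analyze(knowledge, observations)      # learn from the data
        act(knowledge)                                    # e.g. adjust recommendations, offers or prices
        time.sleep(interval_seconds)                      # then repeat, continuously
```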

The second step was to classify all marketing applications of IATs in a way that demonstrates the relationships and differences among them. The proposed marketing taxonomy, a useful and understandable tool for researchers and managers, is depicted below:

 

[Figure: the proposed taxonomy of IAT applications in marketing]

To answer these research questions, the authors reviewed the existing literature and then conducted 100 in-depth interviews with managers from 50 randomly selected companies. Two independent researchers analyzed the interview data, which were then used to shape the taxonomy and the framework below. The authors also made some propositions to help researchers, and mainly managers, utilize IATs and ultimately drive company performance.

[Figure: the proposed research framework for IATs in marketing]

Overall, implementing the right IAT can facilitate numerous marketing functions, allowing companies to achieve a sustainable competitive advantage. Both firms and customers can benefit. Companies are in a position to understand and put customers’ interests first (through collaborative filtering, personalization and recommendation systems) and in return gain customer loyalty and trust. IATs, in turn, offer consumers value by providing convenience, better information, customized selection and less information overload (e.g. price-comparison engines, or agents that configure and customize their computer systems on the basis of their preferences).
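To make the collaborative-filtering mechanism mentioned above tangible, here is a minimal, self-contained user-based example with toy data. It is an illustrative sketch of the general technique, not any particular company’s recommender.

```python
from math import sqrt


def cosine_similarity(a, b):
    """Cosine similarity between two users' rating dictionaries {item: rating}."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def recommend(target_user, ratings, top_n=3):
    """Recommend items the target user has not rated yet, scored by the
    ratings of similar users (user-based collaborative filtering)."""
    scores = {}
    for other, their_ratings in ratings.items():
        if other == target_user:
            continue
        sim = cosine_similarity(ratings[target_user], their_ratings)
        for item, rating in their_ratings.items():
            if item not in ratings[target_user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:top_n]


# Toy usage: three customers who rated a few products from 1 to 5
ratings = {
    "anna":  {"camera": 5, "tripod": 4},
    "bart":  {"camera": 4, "lens": 5},
    "clara": {"tripod": 5, "lens": 4},
}
print(recommend("anna", ratings))  # -> ['lens']
```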

Strengths and Weaknesses:

Since there was no concrete research or fully developed theory surrounding IATs in marketing, and consequently no specific phenomena or existing theoretical frameworks to test, the authors rightfully opted for the grounded theory approach. In contrast to traditional research methods, they tried to construct a theory by discerning which ideas and concepts appeared repeatedly in the interview data. These patterns were then grouped into categories that formed their theory and shaped both the taxonomy and the framework.


Although the authors reasonably based their analysis on grounded theory, whether they applied it correctly is another question. The fact that they reviewed the existing literature in order to formulate the interview questions somewhat conflicts with the grounded theory methodology. The goal of this approach is to discern natural patterns; however, the questionnaires used possibly inhibited this, since the literature-derived questions predisposed the managers’ answers.

“Given further progress in recommender systems (or other means of reducing costs for the customer), a situation might arise in which a ‘ready-made’ solution provided by the system delivers higher preference fit than a customer-designed product—which, on the other hand, delivers the advantage of enabling ‘I designed it myself’ feelings.” (Franke, Schreier and Kaiser, 2010). This poses a very serious question for companies. When is it preferable to let an agent customize, decide or recommend a product or website? How quickly and how frequently should agents respond and adjust to user needs? Ultimately, what is more beneficial for both parties: implementing agents, or giving consumers the freedom to tailor products and websites to their needs? Perhaps technological advancements and the machine learning capabilities of IATs could soon enable companies to distinguish these two categories of consumers and present each with the appropriate interface.

 

References:

Franke, N., Schreier, M. and Kaiser, U. (2010). The “I Designed It Myself” Effect in Mass Customization. Management Science, 56(1), pp.125-140.

Kumar, V., Dixit, A., Javalgi, R. and Dass, M. (2015). Research framework, strategies, and applications of intelligent agent technologies (IATs) in marketing. Journal of the Academy of Marketing Science, 44(1), pp.24-45.

Russell, S. and Norvig, P. (1995). Artificial Intelligence: A Modern Approach. 1st ed. Prentice Hall, p.31.

“Buy a present for my wife” said Jan to the phone


This year St. Valentine’s Day caught millions of men by surprise, again, leaving them wondering what present to buy for their partners. What if somebody or something could perform this burdensome task in a timely manner? There might be a solution…

Viv

Viv is an intelligent personal assistant introduced to the market on May 9, 2016, and acquired by Samsung in October 2016. Similar products such as Siri, Google Now, Microsoft’s Cortana and Amazon’s Alexa can perform some basic tasks, but nothing beyond what they have been programmed to do. Thanks to artificial intelligence, Viv can generate code by itself and learn about the world as it is exposed to more user requests and new information.

This makes it by no means a one-size-fits-all product. Viv is expected to learn and store information about every user, and to learn over time how to serve him or her personally. For example, if the owner asks “I need to buy a present for my wife for St. Valentine’s Day”, Viv should be able to predict what a suitable present would be, or perhaps book a table for two at a fancy restaurant downtown.
