AI use cases in four key industries

Adopting AI and machine learning should not be an end in itself for organizations; it should be done for sound business reasons. And as an omni-use technology, we see that happening in all sorts of industries and use cases – from agriculture to space exploration.

But as AI begins to be adopted, we see the use cases becoming industry-specific very quickly. One reason for that is that AI is driven by data, and each organization's data is unique – peculiar to it and the industries in which it operates.

So for our new Voice of the Enterprise (VotE) AI and Machine Learning survey and report, we wanted to find out what was really happening in four industries in particular: financial services, healthcare, manufacturing and retail.

We asked respondents in each of those industries what their top use cases were now and what they thought they would be in two years’ time. What we found was extremely interesting and indicative of the trajectory of AI and machine learning in the enterprise.

Financial services companies are typically associated with use cases such as fraud detection, which did indeed come out top with 42% of respondents choosing it as their top AI use case. But over the next two years, more complex use cases such as digital and data security (38%) and compliance (37%) will become more of a focus for AI investments than fraud detection (35%).

In healthcare – the industry that I think has the biggest upside potential for radical transformation using machine learning – our data shows that hospitals are really focused on clinicians and how they do their jobs. Sure, the headlines might be about cancer treatments, but overall more effort is focused on helping clinicians do their day-to-day jobs more efficiently.

In manufacturing, the top two use cases – maintenance forecasting (36%) and supply and demand forecasting (30%) – really speak to the value machine learning can bring in making predictions about complex systems.

And retailers understand that their focus needs to be on customer engagement, but also on complex supply chains, where humans can augment their work with AI. In the future, we expect retail's focus to extend to areas such as payment processing and data security.

So that's just a small snapshot of all the data we have in our latest Voice of the Enterprise AI & Machine Learning survey. For more on the survey and some of its findings, click here; I also made a short video about it, which you can find here.

The Kurian case for Google & machine learning

Google's recent Let's Talk AI event in San Francisco looks like being the last major customer event led by Diane Greene, as we recently found out that Google has hired Thomas Kurian, an Oracle veteran who left that company over the summer. He joins Google on November 26, before taking the reins from Greene in January.

A lot has been made of the upcoming culture clash between Kurian, who spent 22 years at Oracle, a stalwart of 1980s and 1990s Silicon Valley culture, and Google, one of the modern-day Silicon Valley icons. Such a clash is real, but not insurmountable, in my view.

But bringing Kurian on board – just as it was with Greene before him – signals Google is deadly serious about the enterprise market.

Kurian hasn’t even started at Google Cloud yet – let alone taken over the running of the place – but it’s clear that machine learning will be critical to Google’s prospects in the enterprise market.

This is demonstrated by our recently published Voice of the Enterprise AI & Machine Learning survey, which reveals the headway Google has been making under Greene.

Our survey data shows the contrast when respondents in the 451 Alliance are asked which IaaS cloud provider they would choose (indicated in blue) versus which one they would choose specifically for machine learning (in green). It demonstrates that machine learning is a key strength for Google, and we expect Kurian to build on Greene's legacy at Google Cloud in this respect.

Returning to the Let's Talk AI event in San Francisco, Google had a lot of customers talking up its offerings, including Keller Williams, Meredith, Ocado, Siemens and Total, among others. Keller Williams showed how real estate agents' walk-throughs of a house can be quickly turned into a usable asset using AutoML and speech-to-text. Meredith showed how it is using Google's natural language API and AutoML to cut the time spent building taxonomies for some of its properties from years to months. And Ocado showed off the robots used in its distribution centers, the next generation of which will have Google Tensor Processing Units (TPUs) embedded in them; these will be used in the 20 warehouses the company is building for Kroger over the next few years.

Also crucial to Google's enterprise ambitions are its machine learning applications, which are not fully fledged business apps but sit below partners' apps to make them more intelligent – such as Google Cloud Contact Center AI, for which partner Genesys reported an "avalanche of demand".

Both Ocado and Total cited cultural alignment with Google as crucial to them choosing to work with Google Cloud – something I’m sure Kurian won’t want to mess with too much. 

So although Greene won’t be the person to fully realize Google Cloud’s enterprise ambitions, she is passing on a strong foundation to Thomas Kurian. 

I summarized some of these thoughts in a short video, which you can see on YouTube here.

What is the Voice of the Enterprise saying about AI & Machine Learning?

Today marks the official launch of 451 Research’s new survey products focused on AI & Machine Learning, called Voice of the Enterprise (VotE) AI & Machine Learning: Adoption, Drivers and Stakeholders 2018.

I made a video explaining more about it, which you can see via Twitter here.

I’m very excited about this launch because while it’s great to be doing research on a daily basis in this space – talking to end users, technology vendors, service providers and investors about what’s happening – having data that demonstrates what enterprises are actually doing and plan to do in the future gives us a foundation for all the other research to build upon. 

VotE AI & Machine Learning is currently a semi-annual survey – so two per year – but if it proves successful (and it already has done with forward-thinking 451 Research clients), I'm hoping we can expand it to a quarterly survey focusing on different themes. Right now the themes are use cases and business benefits across multiple vertical markets – financial services, retail, healthcare and manufacturing – in one survey, and then a deep dive into the changes happening in infrastructure, platforms and tools to enable the adoption of AI and machine learning, both on-premises and in the cloud.

So what is the Voice of the Enterprise telling us? Well, among other things, the following:

  • Although 17% of respondents say they have already implemented machine learning (ML), we are still at the very early stages of ML deployment.
  • Analytics of various types make up the technology's primary use cases, though security is close behind.
  • Larger enterprises are further ahead in terms of adoption; having resources matters in ML.
  • The biggest barrier to adoption is a shortage of skills. I wrote about that recently and what some vendors are doing about it. 
  • Data gathering and preparation is often also cited as a barrier to adoption. Therefore, organizations are focusing initially on customer data as their primary input, as that’s where they can have the most direct, positive impact using ML. 
  • There is plenty of support from senior management in terms of both influence and ultimate spending decisions, which reflects the strategic nature of ML. 
  • Large cloud vendors have a major ML foothold, though Google and IBM have a much stronger presence in ML than they do in overall cloud adoption.

You can find out more about VotE AI & Machine Learning here.

For Google, the biggest barrier to AI is its biggest opportunity in AI

At the recent Google Cloud Next conference, Google's enterprise division made a plethora of announcements, the details of which are here; there are links to our analysis of it all at the end of this post.

One of my key takeaways was that Google sees the biggest barrier to machine learning as its biggest opportunity. What do I mean by that?

It's commonly understood that lack of skills or talent – and specifically data science talent – is usually first or second on people's lists of the biggest barriers to implementing machine learning. Our newly launched Voice of the Enterprise: AI & Machine Learning survey confirms this, with 36% of respondents citing lack of skills as the biggest barrier to implementing machine learning, and lack of data and lack of budget coming in a distant joint second.

This talent gap means that AI and machine learning remain something of a mystery for some organizations, and they will find themselves falling behind. Google sees its AutoML tool as the answer. AutoML is the spearhead of Google's quest to expand the number of people who can build and train ML models without data science skills. Having launched AutoML for computer vision earlier this year, attracting interest from 18,000 customer organizations, Google is now adding natural language and language translation, so users can create models using images, text or translations and publish them as APIs for use in applications.

One of the key things to understand about AutoML is that it uses machine learning itself to choose the best model for the data uploaded by the user. Choosing a model isn't all there is to machine learning – far from it – but it is a key step along the way. If Google can automate that, it will greatly expand the number of people who can use its machine learning, and thus attract many more users to the Google Cloud Platform.
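
To make that concrete, here is a toy sketch of the underlying principle – letting the machine evaluate candidate models and pick the winner – in Python with scikit-learn. This is emphatically not how Google's AutoML works internally (it reportedly uses more sophisticated techniques such as neural architecture search); it simply illustrates the idea of automated model selection.

```python
# A toy sketch of automated model selection, assuming scikit-learn is
# installed. It illustrates the principle only - real AutoML systems are
# far more sophisticated.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)  # stand-in for "data uploaded by the user"

# Candidate models the selection loop will choose between
candidates = {
    "logistic_regression": LogisticRegression(max_iter=2000),
    "random_forest": RandomForestClassifier(n_estimators=100),
    "svm": SVC(),
}

# Score every candidate with 5-fold cross-validation and keep the best
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"Selected model: {best} (mean accuracy {scores[best]:.3f})")
```

The point is that the user never has to know what a random forest or an SVM is; the system tries them all and reports back – which is exactly the skills barrier such tools aim to lower.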

Some recent 451 Research from Google Cloud Next (451 customer login required):

Google Cloud Next: the ‘excited teenager’ reaches adulthood

Google Cloud Next: The new GCP IoT app embodies the Google ‘analytics first’ mantra

Google Cloud Next: G Suite building momentum with AI-enabled, cloud-native collaboration

Google implementing ‘100% partner attach’ strategy to win enterprise business

Microsoft’s need to make its voice louder in conversational AI drives Semantic Machines buy

Microsoft's need to make its voice louder in conversational artificial intelligence (AI) has driven its acquisition of Semantic Machines, announced this week. Semantic Machines focuses on understanding conversations, not just phrases, to support full conversations in voice or text. It has speech recognition and natural language generation (NLG) technology to communicate with the user in the right context. The technology is language-independent, uses deep learning and reinforcement learning, and the company has been building a large-scale training corpus for spoken and written dialogue.

In addition to no longer having its own mobile OS, Microsoft hasn’t pursued the consumer smart speaker model like Amazon, Google and, most recently, Apple have (although it does have partners making such devices using its software, such as Xiaomi). All this leaves Microsoft in need of other ways to find an edge in the conversational AI market.

However, Microsoft has certainly been making plenty of research breakthroughs in the area, such as in March, when a team at Microsoft Research Asia in Beijing reached the human parity milestone using the Stanford Question Answering Dataset, and in April, when it claimed to have enabled full-duplex conversation with XiaoIce, its AI-powered chatbot that is popular in China (XiaoIce was demoed at a Microsoft event in London I attended this week). Google got lots of publicity when it recently showed something similar at its I/O conference, in the limited domain of booking a hairdressing appointment.

The fact that both those Microsoft research breakthroughs came in China might have had something to do, in part at least, with the decision to buy Semantic Machines, which is based in Berkeley, CA, where Microsoft will now establish a conversational AI center of excellence.

Another reason was the talent, including Larry Gillick, former chief speech scientist for Apple, where he worked on Siri; UC Berkeley professor Dan Klein; and Stanford University professor Percy Liang, who created the core language AI technology behind Google Assistant. Semantic Machines CEO and co-founder Dan Roth also started VoiceSignal Technologies, which was acquired by Nuance Communications for $293m in May 2007. Gillick worked at Nuance, VoiceSignal and Dragon Systems for almost 25 years, so he and the others are steeped in how machines can both understand and communicate with humans using AI and machine learning.

At 451, we’ve got AI & machine learning covered

I started covering machine learning in earnest at 451 Research way back in 2001, focusing initially on machine learning-driven text analytics used for national security applications.

Most of the companies I covered back then are gone via acquisition or expiration, but there are now orders of magnitude more to cover. I cannot list all the companies my colleagues and I cover here, but what I can do is give you a flavor of our coverage by showing you a list of just a selection of our reports that are focused all or in part on AI & machine learning. Most of these are for clients only, but you can see summaries of all of them.

Some are long format Technology & Business Insight reports, some are part of our daily-updated Market Insight service.

The main one is the Current and Future State of AI and Machine Learning report that we published in March. But here are some others…

General

4Sight: As intelligence becomes pervasive, data becomes the ultimate asset (March 2018)

Compute prices stabilize as competition shifts to machine-learning resources (March 2018)

How online retailer Ocado uses AI and machine learning to drive and protect its business (March 2018)

The streets of machine learning are paved with gold, but you need a map to get there (January 2018)

AI World conference: Tackling the AI knowledge gap (January 2018)

At AI World, plenty of real-world use cases for AI and machine learning (January 2018)

The huge promise of unsupervised machine learning in AI (December 2017)

AI, machine learning and the strategies of major software vendors (August 2017)

Three key takeaways for the enterprise from Google I/O 2017 (June 2017)

Robotic process automation: How it works, how it’s used and the vendors doing it (April 2017)

Rise of the machines, Part 1: Has the labor market benefited from technology? (Jan 2017)

Rise of the machines, Part 2: AI and the labor market (Jan 2017)

Machine-learning-based analysis and objectivity: an uneasy pairing (Jan 2017)

Machine learning: past, present and yet to come (Dec 2016)

Data Platforms and Analytics

AI for BI: A potentially harmonious marriage, but not yet problem-free (April 2018)

Total Data market projected to reach $146bn by 2022 (March 2018)

Analytics in the age of the algorithm: Beware the machine-learning ‘black box’ (March 2018)

How to be data-driven: a guide to the importance of cultural and organizational change (January 2018)

2018 Trends in Data Platforms & Analytics (November 2017)

Total Data: Platforms & Analytics (November 2017)

Machine-assisted corporate performance management: great potential and poised for takeoff (October 2017)

Mapping the artificial intelligence-based analytics landscape (September 2017)

Data Management and Analytics Market Map 2017 (August 2017)

In the land of the data giants, the big look set to get bigger (June 2017)

Machine-learning analytics: an M&A bonanza, with more to come (June 2017)

451 Research predicts Total Data market to reach $138.5bn by 2021 (May 2017)

Big data, machine learning shape performance-monitoring developments (February 2017)

Information security

Machine Learning Signals a New Analytics Era in Security (December 2017)

2018 Trends in Information Security (December 2017)

Customer Experience & Commerce

AI brings the dawn of the new customer intelligence platforms (March 2018)

Vendors look to AI for next-generation communications and collaboration (December 2017)

Money20/20 preview: five payments trends to watch for (October 2017)

Dawn of a new era: Why systems of engagement require systems of intelligence (June 2017)

IoT

Crossing the intersection of connected cars and commerce (March 2018)

Digital twins play well in the growing family of IoT terminology (January 2018)

Connected Health Conference 2017: patients, providers and clinical adoption (November 2017)

SAE event highlights the shift of advanced driver-assistance systems to automation (October 2017)

The Changing Automotive Industry: Smarter, More Connected and Increasingly Autonomous (April 2017)

The rise of the data-driven application for business intelligence (Mar 2017)

Big data, machine learning shape performance-monitoring developments (Feb 2017)

The three keys to service provider growth: automate, automate, automate (Feb 2017)

Chips

Silicon revolution: New accelerators to power AI and machine learning – Part 1 (Feb 2018)

Silicon revolution: New accelerators to power AI and machine learning – Part 2 (Feb 2018)

Google details the what, why and how of its Tensor Processing Unit for machine learning (May 2017)

Data centers & critical infrastructure

Which technologies will disrupt the datacenter? (February 2018)

 

The Current & Future State of AI & Machine Learning

My new report from 451 Research – 'The Current and Future State of AI and Machine Learning' – brings together the key points about AI & ML that I've been writing about, speaking to clients about and presenting on for the past two years.

If you agree that technology adoption – like so many other trends – takes the form of an S curve, then I think it's worth asking ourselves: where are we on the S curve of adoption for AI and machine learning? It's impossible to know for certain, of course, but my somewhat educated guess is that we are very early – picture an S curve with an orange bar just above its base indicating our current progress.

Now the point of this bit of cod-science isn’t to spark a debate as to whether we should be a few millimetres to the left or right. Rather, it serves to demonstrate that we’re early in the evolution of machine learning and its use may be barely perceptible to some – even those in the technology industry. That’s because a lot of use cases of machine learning are very narrow.

For example, machine learning is used to improve the accuracy of look-alike modeling in customer journey analytics. It is used to analyze user behavior for information security purposes. And it performs automatic password resetting in customer service situations. None of those is earth-shattering, and none of them means we're just one algorithm or model away from the so-called singularity, when 'the machines' supposedly take over.

So at first on an S curve, things happen slowly – often the change (or adoption) may be imperceptible. But then things start to change quickly, then very quickly, before the pace slows again until the curve becomes almost flat.
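
For the mathematically inclined, that slow-fast-slow shape is the textbook logistic function – included here purely as an illustration of the curve, not as a model derived from our survey data:

f(t) = \frac{L}{1 + e^{-k(t - t_0)}}

Here L is the ceiling (the saturation level of adoption), k controls how steep the middle of the curve is, and t_0 is the midpoint – the moment of fastest change. Growth is barely perceptible for small t, peaks at t_0, and flattens out again as t grows large.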

An S curve can describe how ice melts, how water evaporates, the expansion of the early universe, the fall of empires and, yes, the spread of new technologies, as Everett Rogers demonstrated with his theory of the diffusion of innovations, which described how and at what rate new ideas and technology spread.

I believe we’re at the very early stage of adoption and development of practical AI and machine learning and that there is so much more to come.

The majority of the report’s focus is on use cases, such as customer experience, supply chain analytics, information security, human capital management, Internet of Things (IoT), marketing automation and application performance management. And in vertical markets, we look at machine learning’s use in:

  • Financial services
  • Healthcare
  • Manufacturing
  • Retail
  • Travel & Hospitality
  • Agriculture

And to round it off we have machine learning-focused profiles of 12 major software vendors, plus mentions of numerous startups throughout the report and a section on some of the latest innovations in machine learning.

To download an executive summary of the report and to find out more, click here.

India’s visible infrastructure – connecting the unconnected

I spent last week in India, first at the Nasscom India Leadership Forum (ILF) in Hyderabad, which was this year combined with the World Congress on IT (WCIT), and then a couple of days in Mumbai, including a visit to the TiE Global Summit. India is going through massive change while maintaining strong economic growth. Just this week India overtook China to become the world's fastest-growing large economy again, reporting fourth-quarter GDP up 7.2% against the same period in 2016, although growth on average through the year was nearer 6%.

It's a cliché for people from the West to visit India (I've only been there three times), see potential everywhere and project a bright short-term future, only to be disappointed when they return and things look the same. But the signs of change are everywhere and have been for some time. Walking around Mumbai, you can see the ongoing construction of the new metro system, part of which has already opened. The tagline on the boards protecting pedestrians from the construction – 'Connecting the Unconnected' – struck me as unusually frank and appropriate. Infrastructure connects people, enabling them to get to work efficiently and do their jobs effectively. Infrastructure matters, as we know, but it's a while before it becomes invisible. What do I mean by that?

Invisible Infrastructure is one of the four themes of 451 Research's 4Sight project, our vision of where the world, powered by technology, is heading over the next decade. This is as true of India as it is anywhere else in the world. Major change can happen when people have infrastructure that just works, without them having to think too much about it.

But infrastructure takes many forms. There’s the physical kind such as the metro or even Richard Branson’s proposed Hyperloop from Mumbai to Pune.

Then there is legal and financial infrastructure. India’s legal system certainly has issues but it could prove a major advantage for it in the long term versus China.

Financial infrastructure has been one of the main focus areas of Prime Minister Narendra Modi's administration, including the sudden withdrawal of two major bank notes in November 2016 to combat corruption and tax avoidance. And the Aadhaar identity system is providing an infrastructure for all sorts of innovative financial products – once again, connecting the unconnected.

And technology, of course, plays a massive part in this – as it has done in the changes in India over the last 30 years or so. In his address to the Nasscom audience (via video link to avoid a massive security scrum at Nasscom), Modi launched Nasscom’s Future Skills platform that seeks to train 4 million existing and new IT workers in eight technology areas, including machine learning/AI, virtual reality, robotic process automation, IoT, big data analytics, 3D printing, cloud computing, social and mobile.

Technology is also key to enabling India's 200 million small rural farmers to make a decent living from their land – apparently 65% of them have less than one hectare. A session on the subject at TiE in Mumbai was packed solid, and the audience rushed the stage at the end to network with the presenters and swap cards. I found the proposition of Happy Roots particularly interesting.


There’s still plenty to do – I found in Hyderabad for instance that Uber barely worked for me, and neither did my mobile, which steadfastly refused to move beyond 2G, although colleagues of mine had a better experience on other networks. 

But when you see India's business and technical talent up close, and think how powerful it can be once doing business in India gets easier as its infrastructure becomes invisible, it is incredibly exciting.

Where is the AI Capital of the World?

It's common for politicians and business leaders to proclaim that their country is – or is going to be – the market leader in this or that. The US is famous for more or less every city claiming to be 'The [fill in the blank] Capital of the World'. The latest innovation to be the subject of such proclamations is, of course, AI and machine learning.

What does it actually mean to say a country is a leader in something when that something isn't physical or tangible – when you can't count the number of cars manufactured or the number of oranges exported each year?

With AI that isn't clear, but metrics such as the volume of VC investment in UK startups, the number of relevant PhDs obtained and the market capitalisation of AI companies (OK, perhaps it's a bit early for that one) are some of those bandied around, and they probably point to an answer.

The 11th London.ai event at the end of January, held this time at one of Google's offices, demonstrated the vibrancy of the London AI community. What was striking – besides four great presentations – was the diversity of the presenters. They comprised Noor Shaker, CEO of GTN.AI, who was educated in Syria and Denmark and is now based in London; Julien Cornebise, a London-based French national, former DeepMind employee and now director of research at Element.AI and head of the Canadian company's recently opened London operation; Florian Douetteau, CEO of Paris-based Dataiku; and Andrew Trask, a PhD student at Oxford, who gave a fascinating presentation on the work of OpenMined and decentralized AI.

The UK AI communities in both London and other major centres – notably Cambridge – rely on the best of the best, regardless of their nationality. That's what can make somewhere truly a capital of the world. It also vividly demonstrated the tragedy and rank stupidity of a notion such as Brexit, which, if it happens as some would like, would eventually relegate the UK to the bottom rungs of any such rankings – of that I'm sure.

That same week I visited a very different setting. The All-Party Parliamentary Group on Artificial Intelligence (APPG AI) has been meeting since January 2017. The latest meeting was focused on data and its usage within the context of AI.

The incongruity was palpable: 162 people crammed into the faded mock-gothic surroundings of Committee Room 1 in the House of Lords to talk about technology that hasn't really taken off yet. The committee heard from a couple of vendors and an academic, but the main draw – in my view anyway – was Elizabeth Denham (who, like the Governor of the Bank of England, is Canadian), the UK Information Commissioner and head of the Information Commissioner's Office. Speaking four months before GDPR comes into force, she talked about privacy and the accountability GDPR imposes on organizations. A good roundup of the proceedings is here.

There are many predictions of how much a prominent position in the global AI community could be worth to the UK economy – some bullish and others distinctly bearish. But whatever path it takes, we must do what we can to keep encouraging entrepreneurs, investors and researchers in AI and machine learning to view London and the UK as a suitable place to explore their ambitions. To do anything to discourage all-comers would be madness.

 

AI World roundup

I just got back from a productive three days at AI World in Boston. These are some quick thoughts as I work on a more thorough report for 451 Research clients.

About 2,200 people gathered to hear about use cases, technology and business models for AI/machine learning (ML). It was a diverse group, at least in terms of levels of understanding of AI and machine learning, if not in terms of gender: I attend a lot of technology conferences and most are majority-male, but this one must've been 80% male. About 20% of attendees came from outside the US. Those who were there were eager to engage, learn and discuss.

And what's driving that interest is, of course, the huge potential new revenue and cost savings to be made. Heath Terry, an MD at Goldman Sachs, had some startling numbers to hand, such as annual cost savings in healthcare of $54bn by 2025, or the same number in retail coupled with a $41bn increase in revenue by 2025. And the energy market would enjoy cumulative cost savings of $140bn in the seven years to 2025. Terry also said he'd never seen a market receive this level of investment and not spawn new, large companies.

The startup lightning round on the evening of day one – judged by an all-female VC panel of three, incidentally – had some interesting pitches from Avata Intelligence, BI Brainz, Clickworker.com, Flamingo.ai, indico.io, Synaptik and ZyloTech, with software aimed at solving issues such as customer analytics and assistants, process automation, and something called AI-as-a-service, among other things.

One small criticism would be that panellists need to think about the conference as a whole and the likelihood that the audience has already heard multiple – and sometimes conflicting – definitions of what AI, ML and deep learning (DL) are, so we don't need another discussion about definitions when we could be discussing use cases, for instance.

Veritone CEO Chad Steelberg had an intriguing proposition that we'll cover soon in our research at 451 – and a claim to be the first publicly traded pure-play machine learning software company in the US markets.

I had the honour of closing out the conference with chair Eliot Weinman and Narrative Science chief scientist Kris Hammond on the daunting subject of the future of AI. I had to gather some thoughts together quickly, so I chose brief comments focused on:

  • More data acquisition & consolidation by large vendors (think how Oracle Data Cloud was formed, or IBM-Weather Company or Microsoft-LinkedIn) as data is the feedstock of machine learning and large application vendors infusing their stacks with machine learning will want more of it
  • More focus on how human brains actually work – neuroscience and computer science folks trying to figure out if neural nets do actually work in a similar way to the brain. Nobody seems to know right now
  • More specialist AI chips – Apple, Facebook, Google, Graphcore, Intel, Microsoft, Nvidia, Qualcomm… Some are making them, some are merely designing them for others to make, but all are focused on increasing performance by baking functions into the chips, especially for the increased requirements of deep learning
  • Government involvement will increase – both in a regulatory sense and promoting national interests sense
  • Unsupervised machine learning – because there are only so many labeled datasets around to train models. I explored this further in a recent 451 report you can read here; see also the short sketch after this list.
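
As a minimal illustration of why unsupervised learning matters when labels are scarce, here is a hedged Python/scikit-learn sketch: k-means clustering discovering structure in data that carries no labels at all. The dataset is simulated purely for demonstration.

```python
# A minimal sketch of unsupervised learning: k-means discovers groups in
# unlabeled data. The simulated dataset below is hypothetical, purely for
# illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Simulate unlabeled data: two numeric features forming three loose groups
data = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(100, 2)),
    rng.normal(loc=(5, 5), scale=0.5, size=(100, 2)),
    rng.normal(loc=(0, 5), scale=0.5, size=(100, 2)),
])

# No labels are supplied - the algorithm infers the grouping itself
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)
print(kmeans.cluster_centers_)  # the three discovered group centres
print(kmeans.labels_[:10])      # cluster assignments for the first 10 points
```

Note that no target variable appears anywhere in the snippet – that is the whole point of unsupervised techniques, and why they appeal when labeled training data is in short supply.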

Earlier in the day, I had three excellent speakers on my panel about driving innovation in the enterprise: Martin Mrugal, chief innovation officer for North America at SAP; Casimir Wierzynski, senior director in the office of the CTO in Intel's Artificial Intelligence Products Group; and Scot Whigham, director of global IT service support at InterContinental Hotels Group (IHG). Given that we were on day three and a lot of panels about various aspects of AI/ML had preceded us, I wanted to focus as much on innovation as on machine learning. So we were 15 minutes into our allotted 45 before we steered the conversation back to the main topic of the conference, but we got good feedback from the audience on what was said about how SAP and Intel foster innovation at large tech vendors and how it's done at IHG.

AI World next year is expanding and moving to the Seaport Hotel & World Trade Center in Boston, December 3-5. Mark your calendar.