Tips to Building a Successful API – Part 3


If you haven’t read part 1 and part 2 of this API series, check the links on the right 🙂

A successful API is more than a feature; when you view your API as a product, it can be an enabler of your business strategy. Part of the magic of APIs is that creative developers find uses that the API designers never envisioned. If APIs are well designed and easy to use, this can be an enormous benefit and opportunity, turning your service into a platform that can grow in many ways.

A successful implementation encourages developers to use it and share it with others, creating a virtuous cycle where each additional successful implementation leads to more engagement and more contributions from developers who add value to your service. A great API can help you grow an ecosystem of employees, customers, and partners who can use your API and help you continue to evolve it in ways that are mutually beneficial.

But the promise of APIs can only be realized when target consumers begin to use them. For internal developers, APIs introduce a new way of working, one that will require some buy-in. In addition, internal developers won’t use your API if they don’t believe it’s the best, most efficient way to achieve their goals. Well-designed APIs that are easy to use will encourage adoption by internal developers, paving the way to a better-defined, more consistent, and maintainable approach to development.

For public APIs, the situation is even more competitive. An ever-increasing pool of APIs is competing for developers’ attention, making the design and ease of use of your API critical to its adoption and, ultimately, its success.

Unfortunately, too many API providers build their APIs before thinking through the critical success factors, resulting in APIs that fail to meet business objectives. Delivering a great API isn’t hard if you follow a few proven principles. In this post, I’ll demystify some strategies by reviewing four tips for a successful API.

Tip #1: Design for successful APX (API User Experience)

To deliver great APIs, design must be a first-order concern. Much like optimizing for UX (User Experience) has become a primary concern in UI development, optimizing for APX should be a primary concern in API development. An optimal API design enables application developers to easily understand the purpose and functionality of the API so that they can quickly become productive using it. It also allows organizations to focus on getting the design right before investing in back-end implementation, which is time-consuming and expensive to undo if design issues aren’t identified until after implementation.

The best way to design an API that developers want to use is to iteratively define its structure in an expressive manner and get feedback from developers on its usability and functionality along the way. The API Designer is an example of this concept in action. The API Designer is an open source design environment that leverages RAML, the RESTful API Modeling Language. It provides an editor for drafting the API’s structure while rendering, in real time, an interactive console that enables interaction with the API.

As the API is designed, application developers can interact with it and test its behavior, thanks to an integrated mocking service that returns the values a call to the live API would produce. Because APIs designed in RAML are concise and easy to understand, application developers can rapidly assess the API’s functionality and usability and offer concrete feedback on ways to improve it.
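
For instance, a client-side smoke test against the mocking service can be as simple as the sketch below. This is a minimal sketch, assuming a hypothetical mock URL and resource; the actual endpoint comes from wherever your design tool hosts its mock.

```python
import requests

# Hypothetical base URL exposed by the design-time mocking service.
MOCK_BASE = "https://mocksvc.example.com/api/v1"

def list_products() -> list:
    """Call the mocked endpoint and return the example payload from the spec."""
    response = requests.get(f"{MOCK_BASE}/products", timeout=5)
    response.raise_for_status()  # surface any 4xx/5xx from the mock
    return response.json()

if __name__ == "__main__":
    # The mock returns the example body defined in the RAML spec, so
    # feedback on the payload shape can happen before any backend exists.
    print(list_products())
```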

Tip #2: Optimize for use case scenarios

There is no such thing as a one-size-fits-all API. Even for the same underlying service or set of services, multiple APIs might be required to support different types of users and use cases. An API should be optimized to fulfill a specific business request in a specific context. Too often APIs are modeled after the design of the backend services or applications they expose instead of the use case they fulfill. This results in poor performance of the client application, poor user experience, and ultimately, poor adoption.

To optimize your API for a specific use case, think about how coarse- or fine-grained it should be. For example, if you’re designing an API to enable access to sales order status from a mobile device, you need to consider the constraints of that use case. A mobile application has a higher sensitivity to the number of network trips, latency, and data size than a web application, so this API should be designed to limit backend calls and minimize the size of the data returned.

In addition, this use case is fairly granular – the API will look up an order based on the order number and return a status. Therefore, the API should expose this specific fine-grained functionality so it can be invoked independently.
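
As a hedged sketch of what the mobile client’s call might look like (the endpoint, the `fields` parameter, and the response shape are all assumptions for illustration):

```python
import requests

API_BASE = "https://api.example.com/v1"  # hypothetical endpoint

def get_order_status(order_number: str) -> str:
    """One fine-grained call: a single network trip with a minimal payload."""
    response = requests.get(
        f"{API_BASE}/orders/{order_number}/status",
        params={"fields": "status"},  # assumed sparse-fieldset support to shrink the response
        timeout=5,
    )
    response.raise_for_status()
    return response.json()["status"]  # assumed response shape

print(get_order_status("SO-1001"))  # e.g. "SHIPPED"
```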

If the underlying service it accesses is coarse-grained and you anticipate building additional APIs on that service to address additional use cases, consider a tiered approach. Expose fine-grained services that users can access directly, and add coarse-grained services on top of them to support broader use cases. Users can choose to call the fine-grained APIs directly, or, if they need the combined functionality of multiple fine-grained calls, they can use the coarse-grained APIs. This API designed in API Designer is an example of an API optimized for this case.
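
To make the tiered idea concrete, here is a minimal sketch of a coarse-grained “order summary” service composed from two fine-grained services. All URLs, paths, and field names are hypothetical placeholders, not a definitive implementation:

```python
import requests
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical fine-grained services that clients can still call directly.
ORDERS = "https://orders.internal.example.com/v1"
CUSTOMERS = "https://customers.internal.example.com/v1"

@app.route("/v1/orders/<order_id>/summary")
def order_summary(order_id: str):
    """Coarse-grained tier: one call for clients that need the combined view."""
    order = requests.get(f"{ORDERS}/orders/{order_id}", timeout=5).json()
    customer = requests.get(f"{CUSTOMERS}/customers/{order['customerId']}", timeout=5).json()
    return jsonify({
        "order": order,
        "customer": {"id": customer["id"], "name": customer["name"]},
    })

if __name__ == "__main__":
    app.run(port=8080)
```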

Tip #3: Provide super easy access

Finding an audience for your API begins with publishing it to a portal that allows developers to discover your API so they can begin evaluating it for their use case. The developer portal should include all of the tools developers need to learn about and begin using your API. Developers reviewing your API will only invest a few minutes before deciding whether or not to continue; having information available in a clear and easy-to-consume format will encourage them to stick around rather than go elsewhere. Developers will quickly scan your documentation to get an overview of its functionality, then zero in on what it will take for them to get up and running.

From there, they’ll quickly want to see some examples and ideally, start interacting with the API. Developers are far more likely to use an API that includes interactive documentation that allows them to interact with the API over static pages that only allow them to read about it.

The API Portal includes interactive documentation that not only describes the endpoint but also the fields required to call that API and the data that is returned. In addition, you can add code samples to give developers a head start in building the code to access your API in the applications they build. Finally, the Console includes “try it” functionality that allows developers to interact with and test the API. During the design phase before the API has been implemented, a mocking service allows developers to test the API’s behavior and see the resulting body a call to that API would produce. Once the API is implemented, developers can test live.
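
A portal code sample can be as short as the sketch below – just enough to get a developer to a first successful call. The endpoint, the header name, and the key itself are hypothetical placeholders:

```python
import requests

API_BASE = "https://api.example.com/v1"  # hypothetical
API_KEY = "your-api-key-here"            # issued by the portal on signup

# Many managed APIs expect a key on every call so usage can be tracked,
# throttled, and attributed to a specific developer.
response = requests.get(
    f"{API_BASE}/orders/SO-1001/status",
    headers={"X-Api-Key": API_KEY},  # header name is an assumption
    timeout=5,
)
response.raise_for_status()
print(response.json())
```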

Tip #4: Build an ecosystem (community)

The application developers who consume your API are not just your customers; they are the ecosystem that will drive your success. Treating them as valued members of your community can drive significant mutual benefit. An obvious benefit of a thriving developer community is a wider reach for your API.

To support the organic growth of your API, your developer portal should include an easy way for developers to share knowledge with each other. The Notebook feature of the API Portal demonstrates this concept in action. It allows developers to document new uses they discover for your API to grow the addressable market for your API. In addition, they can share tips and tricks in forums and even add code samples to make it easy for new developers to get started quickly with your API. Finally, a valuable benefit of the community that is sometimes overlooked is that the greater the number of developers using your API, the faster bugs and issues will be identified and communicated so that you can continue to improve the value offering.

In addition, there is a great benefit in having an established communication channel with your developer community. Your API is not a static entity – as new use cases are identified and use of your API expands, enhancements and fixes are inevitable.
When you release a new version of your API, you can easily communicate the enhancements in the new version through your developer portal. You can also quickly assess who’s using each version of your API and communicate an upgrade path to them as you deprecate older versions. Finally, understanding your developer community and having accurate insight into use cases and patterns provide invaluable knowledge that you can use to enhance your API over time.

Summary

APIs are becoming ubiquitous as their potential to transform business is becoming widely recognized. But delivering a successful API program that achieves defined business objectives requires a systematic approach to designing and managing APIs. Great APIs aren’t difficult to develop if you design for your users and the business processes the API will support, if you make it easy for developers to find and consume your API, and if you actively manage your API developer community as an extension of your business.

Until next time, Rob…

“Breaking Bad” on APIs – Lessons Learned – Part 2


If you haven’t read Part 1 of this series, click here.


Organizations often decide to build an API without fully considering key success factors or without first engaging their stakeholders. I saw this firsthand at a previous employer, and it is very painful, not just for the company, but for the consumers of the API, which can have long-lasting repercussions.

In either case, the risk is that the API does not fit the needs of its API consumers. And APIs that don’t fit the needs of consumers have a high cost: limited adoption by developers and ultimately, a failure to meet business objectives. Once the API is designed and built, undoing these mistakes is difficult and time-consuming.

In most cases, you must start over again, redesigning a new API, implementing it by connecting to backend services, then rolling it out again to the developer community. Worst of all, you will have to transition all existing users to the new API. This will require additional work, which they may not have the time or willingness to do. At that point, you’ll be faced with a tough choice – continue to support the original API and its users until they eventually (hopefully) migrate, or shut it off and potentially alienate and lose those users.


Another common pitfall of API programs is allowing the design of your API to be dictated by the constraints of internal systems or processes. This is never a good idea, but is particularly perilous when the backend functionality lives in legacy systems whose data schemas are overly complex or whose business logic has been extended over the years using hard-coded workarounds and convoluted logic. Exposing this kind of dirty laundry to your API consumers is a recipe for failure. APIs modeled after internal systems are difficult to understand and use and developers simply won’t invest the time it takes to become productive with them.

What you need is an API that is simple to understand and easy to use. Developers should be able to assess the functionality of your API and start using it in just a few minutes. The only way to deliver that kind of ease of use is to design for it upfront.

Next up, “Tips to building a successful API”

Until next time, Rob

Not all APIs are Created Equal – Overview – Part 1


To continue on from the post on APIs and their business value that I did a few years ago, I thought I would write an updated post on APIs (Application Programming Interfaces) and do a little deeper dive. APIs had a big impact on my last role and an even bigger impact on my current role as a PM @ 5nine Software, so I thought I would share my knowledge and research so far. APIs are not scary 🙂

APIs are not new. They’ve served as interfaces that enable applications to communicate with each other for decades. But the role of APIs has changed dramatically in the last few years. Innovative companies have discovered that APIs can be used as an interface to the business, allowing them to monetize digital assets, extend their value proposition with partner-delivered capabilities, and connect to customers across channels and devices.

When you create an API, you are allowing others within or outside of your organization to make use of your service or product to create new applications, attract customers, or expand their business. Internal APIs enhance the productivity of development teams by maximizing reusability and enforcing consistency in new applications. Public APIs can add value to your business by allowing third-party developers to enhance your services or bring their customers to you. As developers find new applications for your services and data, a network effect occurs, delivering significant bottom-line business impact.

For example, Expedia opened up their travel booking services to partners through an API to launch the Expedia Affiliate Network, building a revenue stream that now contributes $2B in annual revenue. Salesforce released APIs to enable partners to extend the capabilities of their platform and now generates half of their annual revenue through those APIs.

Next up: “Breaking Bad” on APIs – Lessons Learned

Until next time, Rob.

APIs and their business value…


In the past few months, I have been focusing a lot of my time on the development of our Nutanix Ready Integrated Program, which deals with partner solutions leveraging/consuming our APIs (application programming interfaces). After a lot of research on API programs and consumption patterns, I thought I would share my thoughts and some conclusions on the business side. Not sure if this is considered a blog post or just ramblings, but here we go.

APIs and Data

Data is, in many ways, one of the most valuable assets a business has. A growing number of consumers and businesses are incorporating web and mobile apps into their daily routines, and companies are using data to provide more personalized, tailored experiences to their customers. In addition, companies are analyzing customer and operational behavior to make better decisions. These are some of the valuable new uses for previously isolated data sources.

APIs have emerged as the most accessible way for consumers within the business to extract value from that data: developers can use them to create new business opportunities; improve existing products, systems, and operations; and develop innovative business models. Analysts can grab new data sources more quickly and pull the data into their analytics platforms. As the keys to unlocking precious enterprise data, APIs need to be combined with enterprise connectivity to actually free the data from systems. The API is the piece that makes the data consumable and reusable, and thus APIs become ever more valuable to the business.
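
For example, an analyst pulling API data into an analytics workflow might do something like the sketch below. The endpoint and the field names are hypothetical, and pandas is a third-party package:

```python
import pandas as pd
import requests

# Hypothetical reporting endpoint exposing previously siloed data.
resp = requests.get(
    "https://api.example.com/v1/orders",
    params={"since": "2016-01-01"},
    timeout=10,
)
resp.raise_for_status()

# Flatten the JSON records into a table an analyst can work with.
df = pd.json_normalize(resp.json()["orders"])  # assumed response shape
print(df.groupby("region")["total"].sum())     # assumed 'region'/'total' fields
```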

API Evolution

As more and more APIs come into use, the architecture underpinning them needs to evolve as well – organizations cannot simply deploy APIs on top of existing monolithic systems and processes and expect an overnight transformation. Rather, the transformation begins with initiatives targeted at new, innovative directions for the organization, such as embracing microservices and mobile apps and laying the groundwork for a world of connected sensors. Also, product companies should consider making an API framework a key part of their design strategy, which would enable end users to adopt their products more rapidly and aggressively. And above all, embracing APIs will help ensure that these connections are made intelligently and efficiently. With all of this, I’ve seen that there’s a direct connection to business value as well – generating revenue is considered the most important value that APIs provide to the business.

While revenue generation is an important part of the story, the impact of APIs goes much further into organizations, enabling transformation and agility at many levels. APIs enable enterprises to deploy apps quickly, in a repeatable way, which leads to a faster pace of delivery and the ability to create new and innovative experiences quickly. In addition, APIs can greatly reduce the cost of change, enabling IT and application owners to change apps with minimal impact – especially when there are numerous back-end integrations involved. This is critical to agility since, for the most part, the pace of change of front-end applications is much faster than that of back-end applications. APIs also help enterprises achieve operational efficiency, enabling greater visibility and expanded capabilities, since every API call from the mobile app to the backend system is tracked and traced through an API key.

What are some examples?

For those who are not familiar with APIs, common examples include SOAP and REST. Nutanix uses a REST (representational state transfer) based API, and we allow partners and customers to build on and leverage our API to do some very cool things. From VM monitoring to solution orchestration, the possibilities are endless.

For example, Comtrade, a Nutanix partner, has developed a System Center Operations Manager (SCOM) Management Pack for Nutanix. It leverages our API to pull metrics into SCOM and correlate them with application workloads in a single pane of glass. In this scenario, an IT pro can really understand where the bottlenecks are and take action, or automate that action. Now that is the power of APIs with some DevOps mixed in!
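
As a rough illustration of that kind of metric pull (the endpoint path, port, credentials, and field names below are hypothetical placeholders, not the documented Nutanix API – check the vendor docs for the real resource paths and authentication scheme):

```python
import requests

# Hypothetical REST endpoint for VM statistics.
resp = requests.get(
    "https://cluster.example.com:9440/api/v1/vms",
    auth=("monitor-user", "secret"),  # placeholder credentials
    verify=False,                     # lab-only shortcut; use proper certs in production
    timeout=10,
)
resp.raise_for_status()

for vm in resp.json()["entities"]:    # assumed response shape
    print(vm["name"], vm["cpuUsagePercent"])
```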

To summarize… businesses from every industry are using APIs to add value, from increased revenue to increased agility to improved customer experience. Extraordinary changes are taking place in the enterprise that necessitate a new organization and philosophy for utilizing technology.

In a future blog post, I plan to go into the technical aspects of APIs and use cases.

Until next time, Rob… 🙂

Microsoft Azure Stack Technical Preview finally sees the light… :)

Change is in the air! I know that phrase is associated with spring, but I love the change of seasons, especially winter, when days get shorter and I get to spend time in the snow with my kids. Every winter, I think I can rely on the patterns from the seasons before, but I quickly find I have to adapt to a new reality. For example, I live near Boston, and just when I thought we would have a mild winter, Mother Nature strikes. One week it’s in the 50s, and the next we are in the middle of a blizzard. Changes and transformations are just another fact of life.

Below is a pic of the latest storm, 2/8/16.

IT Disruption

IT is going through a similar transformation. Over the last few years, there has been lots of buzz about the industry’s transformation to hyperconverged infrastructure and how that fits with cloud computing. The traditional model of IT is evolving to make way for agile service delivery. Business units in pursuit of agility are looking for self-service approaches, with the promise of reliability, availability, scale, and elasticity. This has been driving a flight to the public cloud, where developers and business units are going around good IT practices in order to innovate – often introducing risks to their companies that they were never held accountable for in the past and are not equipped to deal with today. In 2015, 40% of IT spending occurred outside the IT organization, up from 36% in 2014, according to Gartner. There is a large opportunity for corporate IT to embrace the new patterns as an alternative to “shadow IT.”

Harnessing the Change

Corporate IT is still responsible for the impact applications make on a company’s operations and, oftentimes, apps can’t move to a public cloud. Traditional IT makes large investments in datacenter hardware for scale, reliability, and availability. Controls over physical access and security, change configuration, and bandwidth and latency minimize risk in the infrastructure. Yet these controls are not only expensive, but can also slow down innovation. Corporate IT needs to evolve to create private and hybrid cloud offerings that can support both traditional and cloud-born application models. There is a huge opportunity for IT to embrace and support the business transformation and improve business efficiency.

If you deconstruct Azure, or any public cloud, at the heart is a world-class datacenter with managed servers, storage, and networking. Having a datacenter that is built on web-scale methodologies is key. Azure, Amazon, and Facebook all understand this. Operations and automation give the private cloud its heartbeat, as clouds require tight integration of servers, networking, storage, and the OS. This is similar to the traditional physical datacenter you run today, but with Nutanix it is a much smaller-footprint, more efficient, and more agile datacenter. And while this infrastructure can reduce hardware costs and provide elasticity, and virtualization can help with mobility, it is the services and new development patterns that make it a hybrid cloud. A hybrid cloud provides self-service capability coupled with elasticity, scalability, and automated management. Where traditional datacenters with 3-tier architecture are designed to minimize access and change, the hybrid cloud in general, and Azure in particular, is designed to encourage it between on-premises and the Azure cloud.

IT Transformation

This transformation begins with a fundamental change – presenting IT as a service. Traditional IT is based on classic distributed servers with strong regulation of users, limiting choice to manage risk and security. In a web-scale infrastructure, most of these traditional business processes have to change to meet the customer’s desire to leverage on-demand services. One of the ways to meet these new customer needs is through next-generation application support. This is where web-scale infrastructure excels, providing quick application/service deployment, iteration, and robust data to show business results. Moving forward, administrators need to not only control their infrastructure, but also abstract applications through services, providing flexibility to their business users.

Introducing New Azure Stack Technical Preview

I first learned of Azure Stack at a partner meeting just before MS Ignite 2015 and was excited even then to dive into a Technical Preview. Finally, many, many months later, Microsoft released the first technical preview of its new Azure Stack offering to the world on Friday.

Azure Stack promises to broaden organizational access to Microsoft’s cloud services and tooling, and is aimed at organizations and service providers that can establish hybrid networks to tap Microsoft Azure services.
Getting the preview involves three steps, with downloads available at this page. There are hardware requirements to check, and the preview is limited to servers that can run Windows Server 2016 and support Hyper-V virtualization. Some requirements include:

  • A dual-socket server with a minimum of 12 physical cores
  • About 500–750 GB of storage
  • A 10 GB install file that also needs to be downloaded

Lastly, there are even more downloads required to support the tools and PaaS services used with Azure Stack.
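
If you want to sanity-check a candidate host before pulling down all of those files, a quick local check might look like the sketch below (thresholds taken from the list above; psutil is a third-party package, and the dual-socket requirement still needs a manual check):

```python
import shutil

import psutil  # third-party: pip install psutil

MIN_PHYSICAL_CORES = 12
MIN_FREE_GB = 500

cores = psutil.cpu_count(logical=False)
free_gb = shutil.disk_usage("C:\\").free / 1024**3  # system drive on Windows

print(f"physical cores: {cores} (need >= {MIN_PHYSICAL_CORES})")
print(f"free storage: {free_gb:.0f} GB (need >= {MIN_FREE_GB})")
if cores >= MIN_PHYSICAL_CORES and free_gb >= MIN_FREE_GB:
    print("Host looks viable for the Azure Stack Technical Preview.")
```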

Microsoft claims that with Azure Stack, it’s the only company bringing its “hyper-scale cloud environment” to organizations and service providers. Top Microsoft executives Mark Russinovich and Jeffrey Snover talked more about Azure Stack in a Web presentation on Wednesday, Feb. 3. Check it out.

Consistent Tooling

Azure Stack essentially is Microsoft’s better bridge to using its cloud services, both the Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) kinds. That’s done by bringing down its tooling to organizations. Those organizations likely are engaged with maintaining their own proprietary network infrastructures and maybe aren’t too quick to connect with external cloud infrastructures.

Microsoft’s current solution for on-premises Azure is Windows Azure Pack, which is the currently supported approach for tapping Azure services in customer datacenters. It depends on using System Center and Windows Server 2012 R2. However, Windows Azure Pack is not as complete as the emerging Azure Stack and was Microsoft’s first attempt at private cloud solutions. Check out my series on Windows Azure Pack!
With Azure Stack, Microsoft is promising to deliver consistent APIs for developers.

That’s possible because its Azure Stack portal, a Web-based solution, uses “the same code as Azure,” according to Microsoft. Microsoft is also promising that scripting tools for management, such as PowerShell and command-line interfaces, will work across Microsoft’s Azure cloud computing services as well as local datacenter implementations of Azure Stack. System Center isn’t required for management. Instead, the Azure Resource Manager solution is used.

Azure Stack is only available for testing right now. Rollout is planned for Q4 of this year. However, the complete solution won’t all be there at “general availability” (GA) product launch. A white paper on Azure Stack, accessible via Microsoft’s announcement, shows the parts that won’t be ready at GA launch:

Azure Stack services at general availability, along with services at preview (indicated by asterisks).

Breaking down Azure Stack

As discussed in a previous blog post written shortly after MS Ignite 2015, Azure Stack is a collection of software technologies that Microsoft uses for its Azure cloud computing infrastructure. It consists of “operating systems, frameworks, languages, tools and applications we are building in Azure” that are being extended to individual datacenters, Microsoft explained in the white paper. However, Azure Stack is specifically designed for enterprise and service provider environments.

For instance, Microsoft has to scale its Azure infrastructure as part of operations. That’s done at a minimum by adding 20 racks of servers at a time. Azure Stack, in contrast, uses management technologies “that are purpose-built to supply Azure Service capacity and do it at enterprise scale,” Microsoft’s Azure Stack white paper explained.

Azure Stack has four main layers, starting with a Cloud Infrastructure layer at its base, which represents Microsoft’s physical datacenter capacity.

Next up the stack, there’s an Extensible Service Framework layer. It has three sublayers. The Foundational Services sublayer consists of solutions needed to create things like virtual machines, virtual networks, and storage disks. The Additional Services sublayer provides APIs for third-party software vendors to add their services. The Core Services sublayer includes services commonly needed to support both PaaS and IaaS services.

The stack also contains a Unified Application Model layer, which Microsoft describes as a fulfillment service for consumers of cloud services. Interactions with this layer are carried out via Azure Resource Manager, which is a creation tool for organizations using cloud resources. Azure Resource Manager also coordinates requests for Azure services.

Lastly, the Developer and IT Pro Experiences layer at the top of the heap provides a consistent user interface via a Web portal. That’s done using a “consistent cloud API.” This layer also supports the use of common management tools.
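
To illustrate the “consistent cloud API” point: in principle, the same Azure Resource Manager REST call works whether it’s pointed at public Azure or at a local Azure Stack endpoint. Here’s a hedged sketch – the subscription ID, token, region name, and api-version below are placeholders to adapt to your environment:

```python
import requests

# Swap this base URL between public Azure and a local Azure Stack
# deployment; the resource call itself stays the same.
ARM_BASE = "https://management.azure.com"  # or your Azure Stack ARM endpoint
SUBSCRIPTION = "00000000-0000-0000-0000-000000000000"  # placeholder
TOKEN = "<bearer-token-from-your-auth-flow>"           # placeholder

# Create (or update) a resource group via the ARM REST API.
resp = requests.put(
    f"{ARM_BASE}/subscriptions/{SUBSCRIPTION}/resourcegroups/demo-rg",
    params={"api-version": "2015-01-01"},  # version string is an assumption; check the docs
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"location": "eastus"},  # use your region (or the Azure Stack region) here
    timeout=30,
)
print(resp.status_code, resp.json())
```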

Microsoft has said Azure Stack will “run on the stripped-down Nano Server implementation of Windows Server [2016],” and any patches or updates will happen by doing clean installations of the hypervisor and Nano Server configuration. Microsoft is still working out the update frequency for Azure Stack, and recognizes that hourly or daily updates are too often, but annual updates would be too slow.

That being said, Azure Stack will get lots of updates over the next year or so. Organizations or service providers running it should “expect to implement updates more frequently than with traditional software,” Microsoft’s Azure Stack white paper advises.

Microsoft plans to gradually add all Azure services to Azure Stack. Currently, in this technical preview, Microsoft has made capabilities available that organizations can download and deploy onto the Azure Stack Technical Preview, including an updated Azure SDK, a Web Apps capability in the Azure App Service, SQL and MySQL database resource providers, and Visual Studio support. Microsoft has said that this first Technical Preview represents just the first installment of a continuous innovation process planned for Azure Stack, which will eventually lead to enterprise customers being able to fully deliver Azure services from their own datacenters. However, Microsoft said that the three PaaS resource providers it has now delivered, for Web Apps, SQL, and MySQL, are still only at the early preview stage.

“Each service in Azure is a candidate for being distributed through Azure Stack and we will listen to customer input and consider technical feasibility in determining the roadmap,” Microsoft’s Azure Stack white paper explained.

Azure Stack is obviously going up against the likes of OpenStack, the open source enterprise cloud computing platform that now has the backing of everybody from Rackspace, HP Enterprise and IBM, as well as a thriving startup ecosystem. Microsoft clearly hopes that its hybrid story will allow it to position Azure Stack as a viable alternative against this quickly growing open source competitor.

In many ways, Azure Stack is the logical next step in Microsoft’s overall hybrid cloud strategy. If you’re expecting to regularly move some workloads between your own data center and Azure (or maybe add some capacity in the cloud as needed), having a single platform and only one set of APIs across your own data center and the cloud to work with greatly simplifies the process.

I am still in the process of deploying and reviewing the Azure Stack Technical Preview in my lab, but wanted to give everyone an understanding of what Azure Stack is and where it is going. My review will be coming over the next few weeks… Stay tuned.

IMO, this year will be a significant milestone in helping customers meet their agile development (DevOps) needs while providing the control corporate IT requires by bringing the power of Azure to your on-premises environment.

Until next time, Rob.