These are the news items I've curated in my monitoring of the API space that have some relevance to the API definition conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is testing their APIs, going beyond just monitoring to understand the details of each request and response.

05 Dec 2017
I fell down the rabbit hole of the latest Facebook version release, trying to understand the deprecation of their User Insights API. The story of the deprecation of the API isn't told accurately as part of the regular release process, so I found myself thinking more deeply about how we tell stories (or don't) around each step forward of our APIs. I have dedicated areas of my API research for the road map, issues, and change log for API operations, because their presence tells a lot about the character of an API, and their usage, I feel, paints an accurate picture of each moment in time for an API.
Facebook has a dedicated change log for their API platform, as well as active status and issues pages, but they do not share much about what their road map looks like. They provide a handful of elements with each release's change log:
- New Features — New products or services, including new nodes, edges, and fields.
- Changes — Changes to existing products or services (not including Deprecations).
- Deprecations — Existing products or services that are being removed.
- 90-Day Breaking Changes — Changes and deprecations that will take effect 90 days after the version release date.
The presence, or absence, of a road map, change log, status and issue pages for an API paints a particular picture of a platform in my mind. Also, the stories they tell, or do not tell, with each release paint an evolving picture of where a platform is headed, and whether or not we want to participate in the journey. Facebook does better than most platforms I track when it comes to storytelling, also releasing a blog post telling the story of each release, providing separate posts for the Graph API, as well as the Marketing API. It is too bad that they omitted the deprecation of the Audience Insight API, which occurred at the time of this story.
While I consider the presence of building blocks like a change log, road map, issues, and status page a positive sign for a platform, it still requires reading between the lines, and staying in tune with each release, to really get a feel for how well a platform puts these building blocks to work. Regardless, I think these building blocks do adequately paint a picture of the current state of a platform--it just usually happens to be the picture the platform wants you to see, not necessarily the picture the platform's consumers would like to see.
I am increasingly tracking the OpenAPI definitions published to Github by the leading API providers I monitor. Platforms like Stripe, Box, and the New York Times are actively managing their OpenAPI definitions using Github, making them well suited for integration into their platform operations, API consumer scenarios, and even analyst systems like what I have going on as the API Evangelist.
Once I have an authoritative source for an OpenAPI, meaning a public URI for an OpenAPI that is actively maintained by the API provider, I have a pretty valuable feed into the road map, as well as the change log, for an API. I feel like we are getting to the point where there are enough authoritative OpenAPIs that we can start using them as a machine readable notification and narrative tool, helping us stay in tune with one or many APIs across the landscape in real-time, and giving API providers an effective tool for communicating changes to the platform--we just need more OpenAPIs, and some new tooling to emerge.
I'm envisioning an OpenAPI client that regularly polls OpenAPIs and caches them. Anytime there is a change it does a diff, and isolates anything new. Think of an RSS reader, but for OpenAPIs--one that goes well beyond new entries, and actually creates a narrative based upon the additions and changes. Tell me about the new paths added, any new headers or parameters, or maybe how the schema has grown. Provide me insights on what has changed, what has been removed, or what will be removed in future editions. As an API analyst, I'd like an OpenAPI-enabled approach to receiving push notifications when an API changes, with a short, concise summary of what has changed in my inbox, via Twitter, or via Github notification.
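The diff-and-narrate idea above can be sketched in a few lines of Python. This is a minimal illustration, not a finished client, and the two inline snapshots stand in for what a real tool would fetch from a provider's published OpenAPI URL:

```python
# A rough sketch of the OpenAPI "diff" client described above: compare two
# snapshots of an OpenAPI definition and narrate what was added or removed.
# The snapshot contents are made-up examples; a real client would poll the
# provider's published URL and cache each version.

def diff_openapi(old, new):
    """Return a small narrative of path-level changes between two OpenAPI docs."""
    old_paths = set(old.get("paths", {}))
    new_paths = set(new.get("paths", {}))
    report = []
    for path in sorted(new_paths - old_paths):
        report.append(f"Added path: {path}")
    for path in sorted(old_paths - new_paths):
        report.append(f"Removed path: {path}")
    # For paths present in both snapshots, look for newly added parameters.
    for path in sorted(old_paths & new_paths):
        for method, details in new["paths"][path].items():
            old_params = {p["name"] for p in old["paths"][path].get(method, {}).get("parameters", [])}
            new_params = {p["name"] for p in details.get("parameters", [])}
            for param in sorted(new_params - old_params):
                report.append(f"New parameter on {method.upper()} {path}: {param}")
    return report

yesterday = {"paths": {"/charges": {"get": {"parameters": [{"name": "limit"}]}}}}
today = {"paths": {
    "/charges": {"get": {"parameters": [{"name": "limit"}, {"name": "customer"}]}},
    "/refunds": {"get": {"parameters": []}},
}}

for line in diff_openapi(yesterday, today):
    print(line)
```

From there it is a small step to pipe each narrative line into email, Twitter, or a Github notification.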
OpenAPI already provides API discovery features through the documentation it generates, and I'm increasingly using Github to find new APIs after providers publish their OpenAPIs there, but this type of API discovery and notification at the granular level would be something new. If such tooling existed, it would provide yet another incentive for API providers to publish and maintain an active, up to date OpenAPI definition. This is a concept I'd also like to see expanded to the API operational level using APIs.json, where we can receive notifications about changes to documentation, pricing, SDKs, and other critical aspects of API integration, beyond just the surface area of the API. All of this will take many years to unfold. It has taken over five years for a critical mass of OpenAPI definitions to emerge, and I suspect it will take another five to ten years for robust tooling to emerge at this level, which also depends on many more API definitions being available.
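A rough sketch of what an APIs.json-driven notification might look like: diff two snapshots of an operational index and report which properties changed. The property names and URLs here are illustrative examples of the kind of operational metadata involved, not a formal APIs.json schema:

```python
# A sketch of operational-level change detection: compare two snapshots of an
# API's operational properties (documentation, pricing, etc.) and report what
# moved. The property names and URLs are hypothetical examples.

def diff_properties(old, new):
    """List operational properties whose values changed between snapshots."""
    changes = []
    for prop, url in new.items():
        if old.get(prop) != url:
            changes.append(f"{prop} changed: {old.get(prop)} -> {url}")
    return changes

last_week = {"documentation": "https://example.com/docs/v1",
             "pricing": "https://example.com/pricing"}
this_week = {"documentation": "https://example.com/docs/v2",
             "pricing": "https://example.com/pricing"}

print(diff_properties(last_week, this_week))
```

The same loop works for any property an index chooses to track--SDK locations, terms of service, status pages, and so on.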
I saw a blog post come across my feeds from the analysis and visualization API provider Qlik, about their Qlik Sense API Insights. It is a pretty interesting approach to trying to visualize the change log and road map for an API. I like it because it is an analysis and visualization API provider using their own platform to help visualize the evolution of their API.
I find the visualization for Qlik Sense API Insights to be a little busy, and not as interactive as I'd like it to be, but I like where they are headed. It tries to capture a ton of data, showing the road map and changes across multiple versions of sixteen APIs, something that can't be easy to wrap your head around, let alone capture in a single visualization. I really like the direction they are going with this, even though it doesn't fully bring it home for me.
Qlik Sense API Insights is the first approach I've seen that attempts to quantify the API road map and change log--it makes sense that it is something being done by a visualization platform provider. With a little usability and user experience (UX) love, I think the concept of analysis, visualizations, and hopefully insights around the road map, change log, and even open issues and status could be significantly improved upon. I could see something like this expand and begin to provide an interesting view into the forever changing world of APIs, keeping consumers better informed, and in sync with what is going on.
In a world where many API providers still do not even share a road map or change log, I'm always looking for examples of providers going the extra mile to provide more details, especially if they are innovating like Qlik is with visualizations. I see a lot of conversations about how to version an API, but very few conversations about how to communicate each version of your API. It is something I'd like to keep evangelizing, helping API providers understand they should at least be offering the essentials like a road map, issues, change log, and status page, but the possibility for innovation and pushing the conversation forward is within reach too!
I have had a number of requests from folks lately to write more about Github, and how they can use the social coding platform as part of their API operations. As I work with more companies outside of the startup echo chamber on their API strategies I am encountering more groups that aren't Github fluent and could use some help getting started. It has also been a while since I've thought deeply about how API providers should be using Github so it will allow me to craft some fresh content on the subject.
Github As Your Technical Social Network
Think of Github as a more technical version of Facebook, but instead of the social interactions being centered around wall posts, news links, photos, and videos, it is focused on engagement with repositories. A repository is basically a file folder that you can make public or private, and put anything you want into it. While code is the most common thing put into Github repositories, they often contain data files, presentations, and other content, providing a beneficial way to manage many aspects of API operations.
The Github Basics
When putting Github to use as part of your API operations, start small. Get your profile setup, define your organization, and begin using it to manage documentation or other simple areas of your operations--until you get the hang of it. Set aside any pre-conceived notions about Github being about code, and focus on the handful of services it offers to enable your API operations.
- Users - Just like other online services, Github has the notion of a user, where you provide a photo, description, and other relevant details about yourself. Avoid making a user account for your API--make sure you show the humans involved in API operations. It does make sense to have a testing, or other generic platform user account, but make sure each member of your API team has their own user profile, providing a snapshot of everyone involved.
- Organizations - You can use Github organizations to group your API operations under a single umbrella. Each organization has a name, logo, and description, and then you can add specific users as collaborators, and build your team under a single organization. Start with a single organization for your entire API operations, then you can consider additional organizations to further organize your efforts, such as partner programs, or other aspects of internal API operations.
- Repositories - A repository is essentially a folder. You can create a repository, clone (check out) a copy using the Github desktop client, manage its contents locally, and commit changes back to Github whenever you are ready. Repositories are designed for collaborative, version controlled engagements, allowing many people to work together, while still providing centralized governance and control by the designated gatekeeper for whatever project is being managed via a repository--the most common usage is managing open source software.
- Topics - Recently Github added the ability to label your repositories using what they call topics. Topics are used as part of Github discovery, allowing users to search using common topics, as well as search for users, organizations, and repositories by keyword. Github Topics provides another way for developers to find interesting APIs using search, browsing, and Github trends.
- Gists - A Github service for managing code snippets, allowing them to be embedded in other websites and documentation--great for use in blog posts, and communication around API operations.
- Pages - Use Github Pages for your project websites. It is the quickest way to stand up a web page to host API documentation, code samples, or the entire portal for your API effort.
- API - Everything on the Github platform is available through the Github API, making all aspects of your API operations available via an API--which is the way it should be.
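As a quick illustration of that last building block, here is a sketch of pulling a file (say, an OpenAPI definition) out of a repository using the Github contents API, which returns file bodies base64-encoded. The organization, repository, and file path are hypothetical, and the live network call is left commented out so the sketch stays self-contained:

```python
# A sketch of using the Github API to retrieve a file from a repository.
# The contents endpoint returns JSON with the file body base64-encoded.
# "example-org", "example-api", and "openapi.yaml" are made-up values.
import base64

def contents_url(org, repo, path):
    """Build the Github API URL for a file in a repository."""
    return f"https://api.github.com/repos/{org}/{repo}/contents/{path}"

url = contents_url("example-org", "example-api", "openapi.yaml")

# A live script would fetch it with an HTTP client, e.g.:
#   with urllib.request.urlopen(url) as resp:
#       payload = json.load(resp)
# Here we simulate the documented response shape instead:
payload = {"content": base64.b64encode(b"openapi: 3.0.0\n").decode()}

definition = base64.b64decode(payload["content"]).decode()
print(definition)
```

The same pattern reaches every other corner of the platform--issues, pages, gists--which is what makes Github operations automatable.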
Managing API Operations With Github
There are a handful of ways I encourage API providers to consider using Github as part of their operations. I prefer to use Github for all aspects of API operations, but not every organization is ready for that--I encourage you to focus on these areas when you are just getting going:
- Developer Portal - You can use Github Pages to host your API developer portal--I recommend taking a look at my minimum viable API portal definition to see an example of this in action.
- Documentation - Whether as part of the entire portal or just as a single repository, it is common for API providers to publish API documentation to Github. Using solutions like ReDoc, it is easy to make your API documentation look good, while also keeping it up to date.
- Code Samples w/ Gists - It is easy to manage all samples for an API using Github Gists, allowing them to be embedded in the documentation, and other communication and storytelling conducted as part of platform operations.
- Software Development Kits (SDK) Repositories - If you are providing complete SDKs for API integrations in a variety of languages you should be using Github to manage their existence, allowing API consumers to fork and integrate as they need, while also staying in tune with changes.
- OpenAPI Management - Publish your APIs.json or OpenAPI definitions to Github, allowing the YAML or JSON to be versioned, and managed in a collaborative environment where API consumers can fork and integrate them into their own operations.
- Issues - Use Github issues for managing the conversation around integration and operational issues.
- Road Map - Also use Github Issues to help aggregate, collaborate, and evolve the road map for API operations, encouraging consumers to be involved.
- Change Log - When anything on the road map is achieved, flag it for inclusion in the change log, providing a list of changes to the platform that API consumers can use as a reference.
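The road map, issues, and change log items above can share a single pipeline. Here is a rough sketch: anything labeled for the change log that has been closed gets rolled into a dated entry. The label name and issue shape are my own assumptions, loosely following the structure the Github issues API returns:

```python
# A sketch of generating a change log from Github issues: closed issues
# carrying a "changelog" label become dated change log lines. The label name
# and the sample issues are illustrative assumptions.

def build_change_log(issues):
    """Format closed, changelog-labeled issues as change log lines."""
    lines = []
    for issue in issues:
        labels = {label["name"] for label in issue.get("labels", [])}
        if issue.get("state") == "closed" and "changelog" in labels:
            # Keep just the date portion of the ISO timestamp.
            lines.append(f"- {issue['closed_at'][:10]}: {issue['title']}")
    return lines

issues = [
    {"title": "Add rate limit headers", "state": "closed",
     "closed_at": "2017-11-20T10:00:00Z", "labels": [{"name": "changelog"}]},
    {"title": "Fix docs typo", "state": "closed",
     "closed_at": "2017-11-21T10:00:00Z", "labels": []},
    {"title": "Webhook retries", "state": "open",
     "closed_at": None, "labels": [{"name": "changelog"}]},
]

for line in build_change_log(issues):
    print(line)
```

Open issues with the same label double as the road map, so the community sees what is coming as well as what shipped.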
Github is essential to API operations. There is no requirement for Github users to possess developer skills. Many types of users put Github to use in managing the technical aspects of projects, taking advantage of the network effect, as well as the version control and collaboration introduced by the social platform. It's common for non-technical folks to be intimidated by Github, and developers often encourage this, but in reality, Github is as easy to use as any other social network--it just takes some time to get used to and familiar with.
If you have questions about how to use Github, feel free to reach out. I'm happy to focus on specific uses of Github for API operations in more detail. I have numerous examples of how it can be used, I just need to know where I should be focusing next. Remember, there are no stupid questions. I am an advocate for everyone taking advantage of Github and I fully understand that it can be difficult to understand how it works when you are just getting going.
One of the side effects of the recent bot craze is that I'm getting to showcase the often very healthy API practices of Slack, as they grow, scale, and manage their developer ecosystem. Slack is beginning to renew my faith that there are API providers out there who give a shit, and aren't just looking to exploit their ecosystems. There are two Slack blog posts that have triggered these thoughts, one on the Slack platform road map, and a little thing about release notes, both of which reflect what I would love to see other API providers emulate in their platform operations.
Slack is going the extra mile to set the right tone in their community, with what I consider to be some of the essential communication building blocks of API operations, which they simply call "keep in touch":
- Recent updates - We improve the Slack platform every day by releasing new features, squashing bugs, and delivering fresh documentation. Here's an account of what's recently happened.
- Support and Discussion - Whether you're working on an app for our App Directory or a custom integration for your team, you'll find yourself in good company, from all of us here at Slack and the wide community of developers working with the platform.
- @SlackAPI - Slack tweets, news, features and tips can be found at @SlackHQ but this? This is all API, all the time.
- Platform Blog - A Medium blog dedicated to the Slack API platform.
- Slack Engineering Blog - A Medium blog dedicated to the Slack engineering team.
- Platform Roadmap - Come, transform teams, and build the future of work with us--About our road map, Explore our roadmap, Review recent platform updates, and Discover what teams want.
- Register As a Developer - Working with the Slack API? Tell us a bit about yourself! We'll use the answers you supply here to notify you of updates to the Slack API, general Slack API news, and to get a better sense of the variety of developers building into Slack.
I just copied and pasted that from their developer portal. Slack does not stop there, also providing an FAQ, a Code of Conduct, and an Ideaboard to further set the tone for how things can and should work in a community. What I like about the tone Slack is taking is that it is balanced--"keep in touch"! Which really is just as much about us API consumers as it is about Slack. Slack has done the hard work of providing most of the essential API building blocks, as well as a valuable API resource; now it's up to the community to deliver--this balance is important, and we should be staying in touch.
Remember the tone Twitter took with us? Developer Rules of the Road!! A very different tone than "keep in touch". The tone really matters, as does the investment in the common building blocks that enable "keeping in touch", both synchronously and asynchronously. Having a road map and change log for your API goes a long way, but telling the story behind the why, the how, and the vision of your road map and change log--that gives me hope that this API thing might actually work.
I am using my minimum viable API operations definition tool to continue profiling the API sector, this time sizing up the Slack API community. Slack is kind of a darling of the API space, so it kind of seems silly to profile them, but profiling those who are doing this API thing right is what API Evangelist is all about--whether I follow the hype or not.
Using my minimum viable API definition, I went through the Slack API portal looking for what I'd consider to be the essential building blocks that any modern API platform should have.
- Description: All of our APIs can be used alone or in conjunction with each other to build many different kinds of Slack apps. Whether you're looking to build an official Slack app for your service, or you just want to build a custom integration for your team, we can help you get started!
- API Base URL: https://slack.com/api/
- Pricing: Not found--you should at least be sharing rate limits, acceptable uses, and other pricing and access related information.
- Terms of Service: https://slack.com/terms-of-service/api
- OpenAPI Spec: Not found--a machine readable OpenAPI Specification for an API is fast becoming an essential element of API operations.
- API Blueprint: Not found--a machine readable API Blueprint for an API is fast becoming an essential element of API operations.
- Postman Collection: Not found--a machine readable Postman Collection for an API is fast becoming an essential element of API operations.
- Github Org / User: https://github.com/slackhq
Performing better than the review of the i.Materialise 3D printing API that I conducted the other day, Slack checks off all but one of the essential building blocks--everything except for pricing. The only other area I find deficient is machine readable API definitions like an OpenAPI Spec and Postman Collections. These aren't required for success, but they can sure go a long way in helping developers onboard, from documentation, to generating the code and tooling that will be needed for integration.
I'm assuming Slack hasn't generated OpenAPI Specs because they have a more XML-RPC style design, which I think many folks assume can't be documented in this way. While it doesn't lend itself to being easily documented with OpenAPI Spec, I found some simple little hacks that make it doable, allowing you to document even XML-RPC designs. Having some OpenAPI Specs and Postman Collections would make the API more accessible for people looking to play with it.
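To illustrate the kind of hack I'm talking about, here is a minimal sketch of describing an RPC-style method as if it were its own path in a Swagger 2.0 document. The chat.postMessage name mirrors Slack's method naming convention, but the parameters and responses shown here are illustrative assumptions, not Slack's actual contract:

```python
# A sketch of the "treat each RPC method as a path" hack: an RPC-style API
# can still be captured in an OpenAPI (Swagger 2.0) structure by making each
# method name a one-segment path, with inputs described as query parameters.
# The parameter list below is illustrative, not a real provider's contract.

openapi = {
    "swagger": "2.0",
    "info": {"title": "RPC-style API", "version": "1.0"},
    "host": "slack.com",
    "basePath": "/api",
    "paths": {
        "/chat.postMessage": {
            "get": {
                "summary": "Post a message to a channel",
                "parameters": [
                    {"name": "channel", "in": "query", "type": "string"},
                    {"name": "text", "in": "query", "type": "string"},
                ],
                "responses": {"200": {"description": "Standard envelope with an ok flag"}},
            }
        }
    },
}

# Each RPC method becomes a documented "endpoint" of one path segment:
print(sorted(openapi["paths"]))
```

From a structure like this, documentation, Postman Collections, and code samples all fall out for free, even though the API isn't RESTful.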
Anyways, I just wanted to test out the minimum viable API operations tool on another API. I am trying to profile several APIs this way each week, helping the number of APIs I am monitoring grow, while also encouraging other API providers to follow Slack's lead.
I always have an inbox full of requests from companies asking me to take a look at their APIs, and provide any feedback that I can. I do conduct more formal reviews for some companies, but I also enjoy looking through the operations of any API as part of my regular monitoring (if I have time). When I do have time, the first part of any review is to see if it meets my definition of a minimum viable API operation.
This is a definition I've been refining for over five years now, looking through thousands of APIs. Even with all this refinement, I still needed a single place I could go to quickly apply it to any of the APIs in my review queue. My objective in doing these reviews is partly to help me get to know what an API does, but also to provide feedback to the API providers, as well as generate stories that I can share with my readers, helping them polish their own strategies along the way.
To help me streamline the reviews I do, and deliver feedback to API providers, I created a little micro tool for myself that I can use as a checklist while I go through the operations of each API in my queue. To beta test my minimum viable API operations definition tool, I profiled the 3D printing API i.Materialise.
- Description: i.materialise has developed several interfaces (APIs) that allow your business to connect with our systems. Integrating apps or websites with i.materialise has never been easier. Feed your data to us, receive all possible order information, and let our +100 3D printers do the rest!
- API Base URL: https://i.materialise.com/web-api/
- Code: Not found--code samples, libraries, and SDKs help reduce friction when onboarding API consumers.
- Road Map: Not found--a road map shared with the community will keep consumers in sync with platform operations, giving them time to prepare, and possibly provide feedback that can be considered.
- Change Log: Not found--a publicly available change log shared with the community will keep consumers aware of what has happened, and reduce the resources needed for support.
- Terms of Service: https://i.materialise.com/legal/terms
- OpenAPI Spec: Not found--a machine readable OpenAPI Specification for an API is fast becoming an essential element of API operations.
- API Blueprint: Not found--a machine readable API Blueprint for an API is fast becoming an essential element of API operations.
- Postman Collection: Not found--a machine readable Postman Collection for an API is fast becoming an essential element of API operations.
- Github Org / User: https://github.com/imaterialise
- Support Page: Not found--pulling together all your support items into a single, easy to find page can help reduce frustration within your API community. Nobody likes having to hunt down ways to get support--put it all in a single page.
- Contact Name: Not found--a dedicated person who can be responsible for an API is a pretty fundamental piece of API operations--don't hide.
- Contact Email: Not found--a dedicated email address for an API is a pretty fundamental piece of API operations--don't hide.
The only areas where the i.Materialise API stumbles for me are the lack of a road map, change log, and dedicated support page, with a contact person and email to reach out to. You can go to the public side of the i.Materialise site and use the contact page, which is linked in the footer of the developer portal, but I strongly recommend having a dedicated support page, and channels that are dedicated to the API community.
The blog, RSS, and Twitter feeds are not dedicated to the API--they are company-wide. This is fine, but at some point I recommend dedicated blog, RSS, and Twitter accounts for the API. It can be easy to burn out API consumers with too much general information that doesn't apply to them, and there is little overhead involved in deploying blog, RSS, and Twitter accounts dedicated to the community.
i.Materialise is closer to being a complete API operation than most APIs that I look at--something that won't take much effort to bring up to speed. The area I do recommend they focus energy on is the availability of an OpenAPI Spec, API Blueprint, and Postman Collection for the platform. These elements would significantly add to the available documentation for the platform, while also allowing them to easily generate SDKs using APIMATIC, which is another deficient area for the API (no code page). In addition to better docs and SDKs, these API definitions would allow any developer to quickly load up, and begin playing with, the API in popular HTTP clients like Postman and DHC.
The lack of an OpenAPI Spec is the most deficient area, in my opinion. The availability of a definition would push the presence of the i.Materialise 3D printing API beyond the minimum viable API operations definition for me, and into the zone of a robust platform--one that will go a long way towards attracting new developers, onboarding them more quickly, and helping them go from discovery to successful integration--which is what this is all about.
Next, if I have the time, I will create an OpenAPI Spec for the platform, which will give me more awareness of the actual API design.
It always makes me smile when I talk to someone about one or many areas of my API research, sharing how I conduct my work, and they are surprised to find how many areas I track. My home page has always been a doorway to my research, and I try to keep this front door as open as possible, providing easy access to my more mature areas, like API management, all the way to my newer areas, like how bots are using APIs.
From time to time, I like to publish my API life cycle research as an individual blog post, which I guess puts my home page, the doorway to my research, into my readers' Twitter streams and feed readers. Here is a list of my current research for April 2016, from design to deprecation.
I am constantly working to improve my research, organizing more of the organizations who are doing interesting things, the tooling that is being deployed, and relevant news from across the space. I use all this research to fuel my analysis, and drive my lists of common building blocks, which I include in the guides, blueprints, white papers, and tutorials that I produce.
I am currently reworking all the PDF guides for each research area, updating the content, as well as the layout to use my newer minimalist guide format. As each one comes off the assembly line, I will add to its respective research area, and publish an icon + link on the home page of API Evangelist--so check back regularly. If there is any specific area you'd like to see get more attention, always let me know, and I'll see what I can do.
It is pretty easy to design, define, and deploy APIs these days, and I get a number of folks approaching me with questions about how to get going with the operations and management side of things. While each company and API provider will have different needs, I have a general list of the common building blocks used by the leading API providers I track across the API sector.
So that I have an up to date URL to share with a couple of my partners in crime, I wanted to organize some of the common building blocks from my almost 50 areas of API research into a single list that can be considered when anyone is planning to deploy an API. For this guide, I wanted to touch on some of the building blocks you should consider as part of your central API developer portal, documentation, and other elements of management and operations--what I feel should be the minimum viable presence for successful API providers.
Taking API Inventory
Take inventory of what web services and APIs may already exist, be in use, or be available within an organization, providing a master catalog of current resources that can be put to use, and evolved.
- Internal APIs - What existing APIs are in operation and use by internal groups?
- Public APIs - What public APIs are currently available for use?
What is the process for on-boarding new users? Walk through what a new user will experience, looking at each step from landing on the home page to having what they need to make their own API call. Reduce as much friction as you can, making on-boarding as fast as possible.
- Portal - Is the portal publicly available, or just a central portal on a private network?
- Getting Started - Does this API have a getting started guide applied to its operations?
- Self-Service Registration - Is this API available for self-service registration?
- Sign Up Email - Do API consumers receive an email upon signup for an account?
- Best Practices - Does this API have a best practices page applied to its operations?
- FAQ - Does this API have a frequently asked questions (FAQ) page applied to its operations?
- Google Authentication - Is Google Authentication available for platform signup and login?
- Github Authentication - Is Github Authentication available for platform signup and login?
- Facebook Authentication - Is Facebook Authentication available for platform signup and login?
The on-boarding experience has to have as little friction as possible, and should feel like what API consumers are already used to when putting other leading API platforms to use. Do not re-invent the wheel, or introduce obstacles into the API on-boarding process.
What is provided when it comes to documentation for the platform? There are a number of proven building blocks available when it comes to API documentation, providing the technical details of what an API can do.
- List of Endpoints - Is there a simple list of endpoints available?
- Static Documentation - Is there static documentation for the API? Is Slate used for the API documentation?
- Interactive Documentation - Is there interactive documentation available for the API?
- Error Response Codes - Are error response codes and detail documented anywhere?
- Crowd Sourced Updates - Does the platform allow the community to edit, and submit changes to documentation using Github, or other mechanism?
- Notifications - Are there notifications that are sent out as part of any change that is made to documentation?
Are there small, simple, usable samples in a variety of programming languages, and potentially for a variety of platforms, demonstrating each API call available via the platform?
- PHP - Are there PHP samples for each endpoint?
- Python - Are there Python samples for each endpoint?
- Ruby - Are there Ruby samples for each endpoint?
- Node.js - Are there Node.js samples for each endpoint?
- C Sharp - Are there C Sharp samples for each endpoint?
- Java - Are there Java samples for each endpoint?
- Go - Are there Go samples for each endpoint?
- Scala - Are there Scala samples for each endpoint?
Generally samples will have minimal authentication elements, and reduce any external dependencies, focusing in on a specific API endpoint call, in a particular programming language.
What SDKs are available? These SDKs might be hand crafted, or auto generated, but should be available in a variety of languages, encouraging the jumpstarting of integrations by as wide an audience as possible.
- PHP - Is there a PHP SDK for the API?
- Python - Is there a Python SDK for the API?
- Ruby - Is there a Ruby SDK for the API?
- Node.js - Is there a Node.js SDK for the API?
- C Sharp - Is there a C Sharp SDK for the API?
- Java - Is there a Java SDK for the API?
- Go - Is there a Go SDK for the API?
- Scala - Is there a Scala SDK for the API?
It is becoming more common for API providers to use an SDK generation service, using the machine readable definitions of APIs as the contract to follow. Even with the overhead of SDK generation, development, and support, SDKs are still widely used to help speed up application and system integration.
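As a toy illustration of the definition-as-contract idea, here is a sketch that turns the paths in a machine readable definition into client methods. Real SDK generation services do far more than this, and the base URL and paths below are hypothetical examples:

```python
# A toy sketch of SDK generation: the machine readable definition drives the
# client code, with one generated method per path. The base URL and paths are
# made-up examples, not a real provider's API.

def make_client(base_url, paths):
    """Return a dict of callables, one per path, that build request URLs."""
    def make_method(path):
        def method(**params):
            query = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
            return f"{base_url}{path}?{query}" if query else f"{base_url}{path}"
        return method
    return {path.strip("/"): make_method(path) for path in paths}

definition = {"paths": {"/charges": {}, "/refunds": {}}}
client = make_client("https://api.example.com", definition["paths"])

print(client["charges"](limit=10))
```

When the definition changes, the client regenerates--which is exactly why keeping the definition authoritative matters so much.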
There is a lot of overlap between mobile and the regular SDK portion of this research, but some providers are publishing resources specifically dedicated to supporting mobile integrations.
- Mobile Overview - Is there a page dedicated to the platform's mobile integration resources?
- iOS SDK - Is there an iOS SDK?
- Android SDK - Is there an Android SDK?
Not all platforms will need to support mobile integrations, but a growing number of APIs are being deployed specifically to support mobile efforts. There are a number of other considerations, but these three areas represent the minimum viable considerations.
What resources are available for managing code across the platform? This area focuses on just the services, tooling, and processes associated with code management, not always the code itself.
- Code Page - Is there a page in the portal dedicated to the code available for a platform?
- Github - Is GitHub used to manage code that is part of API operations?
- Application Gallery - Is there an application gallery available for applications that are built on top of the API?
- Open Source - Are there open source code, and applications available as part of API operations?
- Community Supported Libraries - Does the platform accept and list community supported libraries?
GitHub should play a central role in the code management of any modern API platform. Much like Twitter, Facebook, and LinkedIn will play important roles in your communication and support efforts, GitHub is key to the management of code resources at all levels of operations.
What support services are available 24/7 that developers can take advantage of without requiring the direct assistance of platform operators?
- Forum - Is there a forum available that provides self service support options?
- Forum RSS - Does the forum have an RSS feed?
- Stack Overflow - Is Stack Overflow used as part of the support strategy for the platform?
- Knowledge base - What sort of content directory and knowledge base is available to search and browse?
Self-service support is always present in successful API platforms. Like the web, APIs are a 24/7 operation, and if developers cannot get direct support around the clock, there should be a wealth of self-service items available.
What support services are available that developers can take advantage of that involve direct employee attention? Even though APIs should be self-service where it makes sense, direct support will always play an important role in setting the tone for the community.
- Email - Is there an email for API consumers to receive direct support?
- Contact Form - Is there a contact form for API consumers to receive direct support?
- Phone - Is there a phone number available for API consumers to receive direct support?
- Ticket System - Is there a ticketing system available for API consumers to receive direct support?
- Social - Is community support also offered via existing social network profiles and channels?
- Office Hours - Are office hours available, and posted for API consumers to take advantage of?
- Calendar - Is there a calendar of events for office hours, and other support related events?
- Paid Support Plans - Are there paid support plan options available for the platform?
APIs are a business, and you have to provide support. Many savvy API consumers will browse the blog, Twitter account, and other support channels looking for the right amount of activity and assistance present -- if it's not there, they'll move on.
The Road Map
How are we planning and communicating updates to the platform? This means providing a map of how things have changed across the platform, from versioning of the API itself, to documentation, and other aspects of platform operations.
- Road Map - Is there a road map shared with API consumers?
- Idea Submission - Can API consumers and partners submit ideas for inclusion in the road map?
A road map plays a critical role as a sort of valve or joint where platform provider and platform consumer engage. It pushes the provider to consider ideas from the community, bringing the platform closer into alignment with consumers, and goes a long way in building trust with the community.
What is currently happening on an API platform, providing a real-time heartbeat of the current status of API resources? There are a handful of common elements platforms use to stay in tune with their platform operations, while also sharing with the community.
- Status Dashboard - Is there a status dashboard available to API consumers?
- Status RSS - Does the status dashboard have an RSS feed?
- Status History - Is status history archived, and available for review alongside the current status?
When done right, platform status shared with the community sends an important signal on a regular basis that all is well on a platform -- something that will be echoed across the platform and the social web, eventually reaching others who might become new consumers.
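To show why a machine readable status feed matters, here is a short Python sketch that parses a made-up status feed -- the feed contents are invented for illustration, but the item/title shape is the common RSS 2.0 structure many status dashboards publish.

```python
import xml.etree.ElementTree as ET

# A made-up status feed, in the common RSS 2.0 shape.
FEED = """<rss version="2.0"><channel>
  <title>Example API Status</title>
  <item>
    <title>Resolved: elevated error rates on /photos</title>
    <pubDate>Tue, 05 Dec 2017 10:00:00 GMT</pubDate>
  </item>
</channel></rss>"""

root = ET.fromstring(FEED)
for item in root.iter("item"):
    # Each item is one status event that consumers can subscribe to.
    print(item.findtext("pubDate"), "-", item.findtext("title"))
```

A feed like this is what lets consumers wire status into their own monitoring, rather than having to remember to check a dashboard.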
What has already happened with a platform, providing a single archive of all changes made to the platform, for consumers to review at any time, in an easy-to-find location?
- Change Log - Is there a change log available for API consumers to review, to better understand what changes have been made?
- RSS Feed - Is there an RSS feed for the platform change log, allowing users to subscribe to changes as they are made?
- Notifications - Are notifications sent out about changes to the road map, and the status of overall operations that will impact API consumers?
- Emails - Are email notifications sent to API consumers when there is a change in the road map or status of the API platform?
An active change log is one of the clear signs that a platform is active, and something you can depend on. The record that exists across a platform's road map, status, and change log will help set the tone for an API community. A platform where these elements are missing, have big gaps in information, or are out of sync shows signs of wider illnesses that may exist across platform operations.
What are the communication elements available as part of the overall feedback loop for an API platform? There should be at least a minimum viable communication presence, otherwise it is unlikely anyone will learn that a platform exists.
- Blog - Is there a blog for API communications?
- Blog RSS Feed - Does the blog have an RSS feed?
- Twitter - Is there a Twitter account for API communications?
- Email - Is there an email account for API communications?
- LinkedIn - Is there a LinkedIn account for API communications?
- Slack - Is there a Slack channel for API communications?
- Email Newsletter - Is there an email newsletter dedicated to API communications?
Like support, the road map, status, and change logs, an active and informative communication strategy will set the tone of an API community and build trust amongst consumers, while providing clear signals of whether a platform is healthy or should be avoided.
What other resources are available for API consumers to take advantage of? Common resources provide a wealth of usually self-service knowledge resources that API consumers can consume on demand, as part of their API integration journey.
- Case Studies - Are there case studies available showcasing how APIs can be put to use?
- How-to Guides - Are there how-to guides assisting consumers in understanding how to integrate with an API?
- Webinars - Are webinars conducted, introducing consumers to platform operations?
- Videos - Are there videos available to assist consumers in understanding what a platform does, and how to integrate with it?
API consumers will learn in different ways. Not all will need how-to guides and videos, but many users will prefer them. Make sure to provide a wealth of up-to-date, informative resources.
Consumers of an API platform always need an account where they can get access to API authentication, usage reports, and other common elements of API operations. What does the developer account, or area, look like, and what resources are available for developers to take advantage of?
- Developer Dashboard - Is there a dashboard for API consumers?
- Account Settings - Can API consumers manage their account settings?
- Reset Password - Can API consumers reset their passwords to their account?
- Application Manager - Can API consumers manage the applications set up to integrate with the API?
- Usage Logs & Analytics - Can API consumers access logs and analytics for their API consumption?
- Billing History - Can API consumers see billing history for their accounts?
- Message Center - Is there a messaging center for API consumers to communicate with the platform, and receive notifications?
- Delete Account - Can API consumers delete their account?
- Service Tier Management - Can API consumers change / update the tier of service their account exists in?
As mentioned in on-boarding, make sure the developer account acts and feels like other modern SaaS and online accounts. Don't make it difficult for API consumers to manage their profile and account on an API platform. There are a wealth of healthy examples of how to do this right across the API landscape.
It may seem silly, but what APIs are available for managing API management related elements? API consumers increasingly need programmatic control over all aspects of their API accounts, as the number of APIs they use increases. There are a number of API platforms that provide API management APIs, something that is easy to do with modern API management infrastructure.
- User Management - Is there an API for managing users who have access to any API?
- Account Management - Is there an API for managing account level information?
- Application Management - Is there an API for managing applications that have access to any API?
- Service Management - Is there an API for accessing service level details for available APIs?
Remember, you may be one of multiple APIs that API consumers are using to drive their web, mobile, and device applications, as well as systems integrations. Allow for the automation of all aspects of their accounts, user details, applications, and service management.
When it comes to API operations, what is needed to reach an international audience? There are a number of building blocks emerging that are being used by leading platforms to make sure they are properly internationalized for a global audience.
- Language - Are there multiple language versions of the portal available?
APIs are global resources, and are increasingly being deployed to support multiple regions around the world. Even if internationalization is a down-the-road concern, take a moment to understand how far down the road it is.
Authentication is central to many other lines of the API life cycle. There are several common elements present in modern API solutions that address authentication.
- Overview - Is there an authentication overview available?
- Basic Auth - Does the platform employ basic authentication for accessing API resources?
- Key Access - Does the platform require API keys for accessing API resources?
- JSON Web Token - Does the platform require JSON Web Tokens for accessing API resources?
- OAuth - Does the platform require OAuth for accessing API resources?
- Tester - Is there an authentication tester available?
- Scopes - If OAuth is employed, is there a page dedicated to sharing OAuth scopes?
- Two Factor Authentication - Is two factor authentication available for the platform?
While not the perfect identity and access management stack, there are plenty of proven approaches to handling API authentication. Carefully consider how much authentication is necessary, based upon what resources are made available, and the expectations around API integration. Do not overdo authentication when it is not necessary, but also make sure you don't underinvest in this area, as it will bite you in the ass down the road.
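For reference, here is a hedged Python sketch of how the schemes above actually appear on the wire -- the header names are the standard conventions, but every credential value here is fake.

```python
import base64

# Basic Auth: base64("username:password") in the Authorization header.
credentials = base64.b64encode(b"demo-user:demo-pass").decode()
basic_headers = {"Authorization": "Basic " + credentials}

# API key access: often a custom header, sometimes a query parameter --
# the exact header name (X-Api-Key here) varies by provider.
key_headers = {"X-Api-Key": "your-api-key"}

# OAuth 2.0 and JSON Web Tokens: a bearer token, obtained from the
# platform's token endpoint, passed in the Authorization header.
bearer_headers = {"Authorization": "Bearer your-access-token"}

print(basic_headers["Authorization"])
```

Note that Basic Auth is encoding, not encryption -- which is why every one of these schemes assumes HTTPS underneath.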
The details of securing an API platform. Since web APIs often use the same infrastructure as websites and applications, some of the approaches to security can be shared.
- Security Practices Page - Is there a page dedicated to providing an overview, and sometimes detail, of security practices?
- Security Contact - Is a security contact published as part of platform operations?
Share as much detail as possible about what is being done to mitigate threats at all layers of your stack. This should be just as much an admission that you know what you are doing as it is an important detail for API consumers around security. When security details are missing from an API platform's presence, I find it is usually because this area wasn't considered, more than anything else.
The lawyers are driving and guiding almost all the value being generated and captured via the growing number of online services we depend on. Any savvy API consumer will be looking to understand the legal requirements that surround integration, so make sure there are a handful of building blocks present.
- Terms of Service - A terms of service that guides platform operations, and developer integrations.
- Licensing - What the licensing considerations are for all data, content, server & client code, as well as APIs.
- Branding - The branding requirements, guides, and other assets to be considered as part of company branding.
The best API providers have a legal department that is more human than lawyer, speaking plain English over legalese. A simple, comprehensive, and understandable legal department is a sign of a healthy platform, with nothing to hide from its consumers.
What are the units of currency the platform uses? What are the individual value units applied to each API, and how are things calculated? Most likely this is done in dollars, or euros, but other units are emerging as well.
- Value - What is the direct value associated with an API?
- Usage - What direct value does API usage deliver?
- Volume - How does volume usage of an API deliver value?
- Limits - How is value maintained by imposing limitations?
- Users - How does having more users generate value?
- Applications - How does having more applications generate value?
- Integrations - How can more integrations with other systems generate value?
You would be surprised how many existing API platforms I speak with cannot answer many of these questions. They feel their APIs are valuable, but cannot articulate the value they bring to their consumers in a coherent way. Understanding the direct value an API generates is something that should be discussed as part of every API deployment, and shared with API consumers and other key stakeholders.
Beyond the obvious, APIs are generating a lot of value for platform providers and consumers. What are some of the common ways to look at indirect value generation?
- Marketing Vehicle - How are APIs used as a marketing vehicle for an organization, products or services?
- Traffic Generation - How is an API used for generating traffic to other websites, mobile applications, or devices?
- Brand Awareness - How is an API used for increasing brand awareness of an organization, and its products or services?
- Data & Content Acquisition - How does the acquisition of data or content via an API generate value?
- Syndication - How does the API generate value through the syndication of data, content, and other digital resources?
There are numerous APIs that restrict any indirect value from occurring by tightening down on other aspects of API operations. It takes a savvy API provider to be in tune with the indirect value generated via an API, and see the big picture of what is possible with an API presence.
These are the key elements of API plans that I have gathered from across hundreds of API providers. These elements can be associated with specific plans that are available, but they do not have to, and I often use them to generally describe the plans, or perceived plans behind API operations. These are the elements you should be considering as part of your own plans. You do not have to use all of them, but hopefully they will help you better understand the possibilities when it comes to API planning.
- Overview - Is there a page dedicated to providing an overview of all the plans available via the API platform?
- Private - Are there private APIs available via the platform?
- Internal - Are APIs available via the platform used internally?
- Partner - Are APIs available via the platform used by partners?
- Public - Are APIs available via the platform available publicly?
- Free - Is there free API access via the platform?
- Commercial - Is there commercial usage of API resources?
- Non-Commercial - Is there non-commercial usage of API resources?
- Educational - Is there educational access to API resources?
Using HTTP as the transport for your API does not mean it is a public API by default, but there are a number of technical, business, and political elements to be considered when planning the internal, partner, and public access to API resources. Have a plan, share the plan, and use it to guide platform discussions.
Beyond the overall access considerations, what are the specific metrics being applied to overall API operations, as well as individual plans and access tiers? Depending on the resource, there are a number of metrics being used across the API space by leading API providers. This layer of the journey is meant to walk through the metrics you will want to consider in your API journey, allowing you to cherry-pick the ones that are most important to you. Not all metrics apply in all situations, but they are the building blocks of good API plans.
- Access - Is access (or not access) used as a metric in monetization, or can you buy access to some API resources?
- Calls - Are API plans metered by individual API call?
- Transaction - Are API plans measured by overall transactions completed?
- Message - Are API plans measured by number of messages sent?
- Compute - Are API plans metered by the amount of compute resources available?
- Storage - Are API plans metered by the amount of storage used?
- Bandwidth - Are API plans metered by the amount of bandwidth used?
Metrics are often rooted in the hard costs of deploying, managing, and operating an API. Once they are well defined, and you get more in tune with platform operations, what value is being generated, and what operational costs are, you will begin to see things in new ways. Think about what Amazon Web Services has done with APIs, pushing the concept of how we measure the access of valuable digital resources.
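As a back-of-the-envelope illustration of call-based metering, here is a Python sketch -- the included volume and overage price are invented numbers, not any real plan.

```python
def monthly_charge(calls, included=10_000, price_per_1k_overage=0.50):
    """Charge for one month on a hypothetical call-metered plan:
    the first `included` calls are free, overage billed per 1,000 calls."""
    overage = max(0, calls - included)
    return round(overage / 1000 * price_per_1k_overage, 2)

print(monthly_charge(12_500))  # 2,500 calls over the included 10,000 -> 1.25
```

Even a toy model like this forces the conversation the section calls for: what a call actually costs you to serve, and what it is worth to the consumer.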
What are the limitations and constraints applied as part of API planning operations? How are these crafted, applied, and reported upon? All APIs will have limitations. Even with the wealth of scalable tooling available today, there are still a handful of areas where limitations are being applied to keep platforms healthy and stable.
- Overview - Is there a page dedicated to helping understand API limits in place?
- Range - Are API rate limits based upon ranges of the metrics applied to API resources?
- Resources - Are API rate limits applied to individual API resources?
- Unlimited - Are there places where there are no limits applied?
- Increased - Can rate limits be increased?
- Inline - Are API rate limits available inline for each API in the documentation?
The primary reason for setting limitations is to keep API resources available to the entire community, helping ensure stability, and keeping operational costs within reasonable realms. However, limitations are also used for business and political goals, going well beyond the common technical restrictions in place.
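A common way consumers honor these limits is by reading rate-limit response headers. The Python sketch below assumes the `X-RateLimit-*` header names popularized by platforms like GitHub and Twitter; check your provider's documentation for the exact names it uses.

```python
import time


def seconds_to_wait(headers):
    """How long to pause before the next request, based on the
    rate-limit headers returned with the previous response."""
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    if remaining > 0:
        return 0.0  # budget left in the current window, proceed
    reset_at = float(headers.get("X-RateLimit-Reset", 0))  # epoch seconds
    return max(0.0, reset_at - time.time())


# Example headers, as a platform might return them on a throttled response.
sample = {"X-RateLimit-Limit": "60",
          "X-RateLimit-Remaining": "0",
          "X-RateLimit-Reset": str(time.time() + 30)}
print(round(seconds_to_wait(sample)))
```

Publishing limits in headers like this is what makes them self-service -- consumers can back off automatically instead of discovering the limit through failed calls.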
The consumption of API resources is often measured within timeframes, in addition to the wide number of other metrics that can be applied. Having meaningful timeframes defined for evaluating how APIs are consumed, and using them as part of overall planning, matters for all aspects, ranging from rate limits to billing.
- Seconds - Are elements of plans metered in seconds?
- Minutes - Are elements of plans metered in minutes?
- Hourly - Are elements of plans metered by the hour?
- Daily - Are elements of plans metered daily?
- Monthly - Are elements of plans metered monthly?
- Annually - Are elements of plans metered annually?
At first these seem like they shouldn't be included in the minimum viable presence for API operations, but in reality, these timeframes are core to everything we do. We limit API calls by the second, minute, and hour, and we often clear limitations each day or monthly, as we bill for usage or just allow the amount of consumption we can afford as platform providers.
The communication around partner levels of access is critical to overall health and balance with other tiers of access. Providing as much detail for partners, but also potentially other levels of access is important. Here are a few of the building blocks employed to help manage partner details.
- Landing Page - Is there a landing page dedicated to the partner program?
- Program Details - Are the program details available via a landing page, as well as in a portable, shareable format(s)?
- Program Requirements - What are the requirements to be part of the partner program?
- Program Levels - What are all the levels of the partner program, and what are the details?
- Application - Is there a partner program application available for prospects to fill out?
- List of Partners - Will there be a list of partners available for other partners and consumers to view?
How Are APIs Found?
How are APIs being discovered across the current API landscape? How are APIs being found by developers and application architects at all stages of development?
- API Directory - What API directories are in use?
- APIs.json - Is APIs.json in use to provide metadata indexes for API discovery?
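To make the discovery layer concrete, here is a Python sketch that parses a minimal APIs.json-style index -- the entries are invented, and the fields shown are a trimmed subset of the full specification at apisjson.org.

```python
import json

# A minimal, made-up APIs.json-style index for a fictional platform.
INDEX = """{
  "name": "Example Platform",
  "url": "https://example.com/apis.json",
  "apis": [
    {"name": "Photos API",
     "baseURL": "https://api.example.com/v1",
     "properties": [
       {"type": "x-openapi", "url": "https://example.com/openapi.json"}]}
  ]
}"""

index = json.loads(INDEX)
for api in index["apis"]:
    # Each entry points a client at the API, and its machine readable definition.
    print(api["name"], "->", api["baseURL"])
```

The point of the format is that a directory or search engine can crawl these indexes and discover APIs, and their definitions, without any human intervention.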
This provides one possible base for the average API operation. Granted, not every element here should be implemented by all APIs, but it does provide a healthy checklist that can be considered as part of any API's life cycle. I'm sharing this so that my partners may consider it as part of their own operations, and to use as a draft for a future white paper that any company can use as a guide in their own API journey.
My goal in assembling this information is to help shape what the portal, and potentially the online API presence, might be for an API. I also want to provide a nice checklist that anyone can just run down, making sure no important element was overlooked. It is easy to miss things while you are down in the weeds making an API happen. This is why I'm here, to help keep an eye on the bigger picture, and provide you with what you need to be as successful as you can in your own API efforts.
I find it very tough to provide just enough information to people without going into areas of the API life cycle that do not apply. This guide is meant to address what is needed as you prepare to launch a new API, but could also be used by API providers with existing APIs and portals who are looking to consider what might be next for a road map. You can find all of these elements as part of my overall research into the API space, as well as additional areas and building blocks that didn't quite fit into this particular perspective.
I am working on several very rewarding API efforts lately, but one I'm particularly psyched about is Open Referral. I'm working with them to help apply the open API format in a handful of implementations, but to also share some insight on what the platform could be in the future. I have been working to carve out the time to think about it, and finally managed to do so this week, resulting in what I am hoping will be some rewarding API work.
As I do, I wanted to explore the project using my blog, working to understand all the moving parts, as well as what is needed for the future. I am not recommending that Open Referral tackle all of this work right now, I am just trying to pull together a framework to think about some of the short- and long-term areas we can invest in together. I intend to continue working with Greg, and the Open Referral team, to help spread awareness of the open API specification, and help build the community.
Open Referral is all about being an open specification, dedicated to helping humans find services, and helping even more humans help other humans find the services they need -- I can't think of a more worthy implementation of an API. In my opinion, this is what APIs are all about -- providing open access to information, while also allowing for commercial activity. To help prime the pump, let's take a look at the specification, and think more about where I can help when it comes to the Open Referral organization and, eventually, the Open Referral platform.
Human Services Data Specification (HSDS)
"The Human Services Data Specification (HSDS) defines content that provides the minimum set of data for Information and Referral (I&R) applications as well as specialized service directory applications." Which represents a pretty huge opportunity to help deliver vital information around public services, to those who need them, where they need them, using an open API approach.
Currently there is an existing definition for HSDS available on GitHub, but I'd like to see the presence of HSDS elevated, showcasing it independently of any single implementation of the API, or the web and mobile applications built on top of it. It is important that new people who are just learning about HSDS understand that it is a format, independent of any single instance. Here is a breakdown of the HSDS presence I'd like to see.
- Website - Establish a simple, dedicated website for just the specification.
- Twitter - Establish a dedicated Twitter account for the specification.
- Github Repo - Can the repo be moved under the Open Referral GitHub organization?
- Partners - Link to the Open Referral partner network.
- Road Map - What is the road map for the specification?
- Change Log - What is the change log for the specification?
- Licensed - CC0 License
I want to help make sure HSDS is highly available as an OpenAPI definition, as well as in the API Blueprint format. Both of these formats will enable anyone looking to put HSDS to work to use the definition as a central reference for their API implementation, one that can drive API documentation, code samples, testing, and much more.
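To give a feel for what that central reference could look like, here is a hypothetical fragment of an OpenAPI (Swagger 2.0) style definition for an HSDS services listing, expressed as a Python dict -- the path, parameters, and descriptions are illustrative, not the official Open Referral definition.

```python
import json

# Hypothetical sketch of an OpenAPI (Swagger 2.0) fragment for an HSDS
# /services endpoint -- illustrative only, not the official definition.
definition = {
    "swagger": "2.0",
    "info": {"title": "Human Services Data API", "version": "1.0.0"},
    "paths": {
        "/services": {
            "get": {
                "summary": "List available human services",
                "parameters": [
                    {"name": "query", "in": "query", "type": "string",
                     "description": "Keyword to search services by"}
                ],
                "responses": {
                    "200": {"description": "A list of services"}
                }
            }
        }
    }
}

print(json.dumps(definition["paths"]["/services"]["get"]["summary"]))
```

Once a definition like this exists, it becomes the single contract that documentation, client generation, and testing tools all work from.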
I do not know about you, but having an open standard for finding and managing open data about human services, one that can be used across cities, regions, and countries, seems like a pretty vital API design pattern -- one that could make a significant impact in people's lives. When you are talking about helping folks find food and health services, making sure the disparate systems all speak the same language matters, and could be the difference between life and death, or at least just make your life suck a little less.
While Open Referral and HSDS were born out of Code for America, there is an organization in place to use as a base for evolving the format, and building a community of implementations around this important specification. I wanted to take some time and organize some of the existing moving parts of the Open Referral organization, while also exploring what elements I feel will be needed to help evolve it into a platform.
The Open Referral Organization
As I mentioned, there is an organization already setup to guide the effort, "the Open Referral initiative is developing common formats and open platforms for the sharing of community resource directory data — i.e., information about the health, human and social services that are available to people in need." -- You can count me in, helping with that. Right up my alley.
Right now Open Referral is a nice website, with some valuable information about where things are today. The "common formats" portion of that vision is in place, but how do we help scale Open Referral toward being an open platform, while also enabling others to deploy their own open platforms, in support of their own human services project(s)? Some of these projects will be open civic projects by government and non-governmental agencies, while some will be commercial efforts -- both approaches are acceptable when it comes to Open Referral and HSDS.
Let's explore what is currently available for the Open Referral organization, and what is needed to help evolve it towards being a platform enabler. Here is what I have outlined so far:
There is already a basic web presence for the organization, it just needs a little help to look as modern as it possibly can, and assume the lead role in getting as many folks as possible aware of, and involved with, Open Referral and HSDS.
- Website - Having a simple, modern web presence for the Open Referral organization.
OpenReferral.org is the tip of the platform, but if we want to increase the reach of the organization, and take the conversation to where people already exist, we'll need to think more multi-channel when it comes to the organizational presence.
There is already a great presence in place, an active blog, Twitter, and Google Group. Based upon the approach of other open formats, and software efforts, there are a number of other platforms we should be looking to spread the Open Referral presence to.
- Twitter - Managing an active, human presence on Twitter.
- LinkedIn - Managing an active, human presence on LinkedIn.
- Facebook - Managing an active, human presence on Facebook.
- Blog - Having an active, informative blog available.
- Blog RSS - Providing a machine readable feed from blog.
- Medium - Publishing regularly to Medium, as well as the blog.
- Google Group - Maintaining community and discussion on Google Groups.
- Newsletter - Provide a dedicated partner newsletter.
So far we are just talking about marketing and social media basics for any organization. We will need to make sure the overall organizational presence for Open Referral dovetails seamlessly with the more technical side of things, establishing a presence that is very non-developer friendly, yet still speaks to a more technical, developer and IT focused audience.
Open Referral Developer Portal
I suggest following the lead of other successful open standard, and software efforts, and establish a dedicated portal for the platform at http://developer.openreferral.org. This central portal will not provide access to a working implementation of the API, but focus instead on the community resources it will take to help ensure the widespread adoption of HSDS.
Right now, there is only the Ohana API, and supporting client tools that have been developed by Code for America. This is a great start, but Open Referral needs to evolve, making sure there is a wealth of language and platform formats available for supporting any implementation. I went to town thinking through what is possible with the Open Referral developer portal, based upon other open API, specification, and software platforms I have studied. Not everything here is required to get started with a minimum viable developer portal, but it provides some food for thought around what could be.
- Landing Page - A simple, distilled representation of everything available.
- HSDS Specification - Link to separate site dedicated to the specification.
- Github - The GitHub organization as an umbrella for the platform's presence.
- Server Implementations (PHP, Python, Ruby, Node, C#, Java)
- Server Images (Amazon, Docker, Heroku Deploy)
- Database Implementations (MySQL, PostgreSQL, MongoDB)
- Client Samples (PHP, Python, Ruby, Node, C#, Java)
- Client SDKs (PHP, Python, Ruby, Node, C#, Java)
- White Label Apps
- Platform Development Kits
- WordPress (PHP)
- Spreadsheet Connector(s) (Google, Excel)
- Database Connector(s) (MySQL, SQL Server, PostgreSQL)
- Widgets (ie. Search, Featured)
- Buttons (ie. Bookmarklet, Share)
- Visualizations (ie. Graphs, Charts)
- Email - The email channels the organization provides.
- Github Issues - Setup for platform, and aggregate across code projects.
- Google Group - Setup specific threads dedicated to the developers.
- Legal - The legal department for the Open Referral organization and platform.
- Terms of Service - What are the terms of service set by the Open Referral organization?
- Licensing (Data, Code, Content) - What licensing is applied to content, data, and code resources.
- Branding - What are the branding guidelines and assets available for the Open Referral platform.
The Open Referral developer portal really is just a project website which organizes links, and meta information, about any valuable code that is developed using HSDS as its core. The ultimate goal is to provide a rich marketplace of server, client-side, platform, and language resources that can be applied anywhere. Some of it will be officially supported by the platform, while other resources will be partner and Open Referral community supported. The central portal is purely there to help organize all the valuable resources that are generated by the community, and make them easy for the community to find.
Open Referral Demo Portal
I have assembled this outline, based upon the portal presence of leading API platforms like Twitter, Twilio, and Stripe. As with every other area, not all these elements will be in the first iteration of the Open Referral demo portal, but we should consider what the essentials should be in a minimum viable definition for an Open Referral demo portal.
- Landing Page - A simple, distilled down version of portal into a single page.
- Getting Started
- Overview - What is the least possible information we need to get going.
- Registration / Login - Where do we signup or login for access.
- Signup Email - Providing a simple email when signing up for access.
- FAQ - What are the most commonly asked questions, easily available.
- Overview - Provide an overview of how to authenticate.
- Keys - What is involved in adding an app, and getting keys.
- OAuth Overview - Provide an overview of OAuth implementation.
- OAuth Tools - Tools for testing, and generating OAuth tokens.
- Interactive (Swagger UI) - Providing interactive documentation using Swagger UI.
- Static (Slate) - Providing a more static, attractive version of documentation in Slate.
- Schemas (JSON) - Defining all underlying data models, and providing as JSON Schema.
- Pagination - Overview of how pagination is handled across API calls.
- Error Codes - A short, concise list of available error codes for API responses.
- Samples (PHP, Python, Ruby, Node, C#, Java) - Simple code samples in a variety of languages.
- SDKs (PHP, Python, Ruby, Node, C#, Java) - More complete SDKs, with authentication, in a variety of languages.
- Widgets (ie. Search, Featured) - Simple, embeddable widgets that make public or authenticated API calls.
- Buttons (ie. Bookmarklet, Share) - Simple browser, web, or mobile buttons for interacting with APIs.
- Visualizations (ie. Graphs, Charts) - Provide a base set of D3.js or other visualizations for engaging with platform.
- Outbound - Allow for outbound webhook destinations and payloads to be defined.
- Inbound - Allow for inbound webhook receipts and payloads to be defined.
- Analytics - Offer analytics for outbound, and inbound webhook activity.
- Alerts - Provide alerts for when webhooks are triggered.
- Logging - Offer access to log files generated as part of webhook activity.
- Limits - What are the limits involved with accessing the APIs.
- Pricing - At what point does API access become commercial.
- Road Map - Providing a simple road map of future changes coming for the platform.
- Issues - A list of current issues that are known, and being addressed as part of operations.
- Change Log - Providing a simple accounting of the changes that have occurred via the platform.
- Status - A real time status dashboard, with RSS feed, as well as historical data when possible.
- Github Issues - Provide platform support using Github issues, allowing for public support.
- Email - Provide an email account dedicated to supporting the platform.
- Phone - Provide a phone number (if available) for support purposes.
- Ticket System - Providing a more formal ticketing system like ZenDesk for handling support.
- Blog w/ RSS - Providing a basic blog for sharing stories around the platform operations.
- Slack - Offering a slack channel dedicated to the platform operations.
- Developer Account
- Dashboard - An overview dashboard providing a snapshot of platform usage for consumers.
- Account Settings - The ability to manage settings and configuration for the platform.
- Application / Keys - A system for adding, updating, and removing applications and keys for the API.
- Usage / Analytics - Simple visualizations that help consumers understand their platform usage.
- Messaging - A basic, private messaging system for use between API provider and consumer(s).
- Forgot Password - Offering the ability to recover and reset account password.
- Delete Account - Allow API consumers to delete their API accounts.
- Terms of Service - A general, open source terms of service that can be applied.
- Licensing (Data, Code, Content) - Licensing for the data, code, and content available via the platform.
- APIs.json - Providing a machine readable APIs.json index for the API implementation.
- APIs.io - Registering the API with the APIs.io search engine via their API.
This base portal design will act as a demo implementation, with an actual functional API operating behind it. It could also potentially be forked and used as a base for other Open Referral API implementations, customized and built upon for each individual deployment. Github Pages, along with Jekyll, allows for the easy design, development, and forking of an open portal blueprint. I'd like to see all the project sites that support the Open Referral effort operate in this fashion, which isn't unique to Github, and can run on Amazon S3, Dropbox, and almost any other hosting environment.
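The portal outline above calls for publishing the underlying data models as JSON Schema. As a rough sketch of what that might look like for an HSDS-style organization record (the field names here are illustrative, not the official HSDS schema), in Python:

```python
import json

# A hypothetical JSON Schema sketch for an HSDS-style organization record.
# Field names are illustrative only; the official HSDS schema should be
# consulted for the actual data model.
organization_schema = {
    "$schema": "http://json-schema.org/draft-04/schema#",
    "title": "Organization",
    "type": "object",
    "properties": {
        "id": {"type": "string", "description": "Unique identifier for the organization."},
        "name": {"type": "string", "description": "Official name of the organization."},
        "description": {"type": "string", "description": "Brief summary of services offered."},
        "email": {"type": "string", "description": "Primary contact email."},
        "url": {"type": "string", "description": "Website for the organization."},
    },
    "required": ["id", "name"],
}

# Publishing the schema on the portal is as simple as serializing it to JSON.
print(json.dumps(organization_schema, indent=2))
```

Making each data model available like this, as a standalone JSON file, is what allows client tooling to validate records without hard coding any knowledge of the specification.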
One of the strengths of the Open Referral organization, and something essential to its evolution into a platform, is the availability of a formal partner program to help manage the variety of partners who will be contributing in different ways. I suggest operating a site dedicated to the Open Referral partner program, located at the subdomain http://partner.openreferral.org. This provides a clear location to visit to see who is helping build out the Open Referral platform, and to get involved when it makes sense.
- Overview - An overview of the Open Referral partner program.
- Gallery of Partners - Who are the Open Referral Partners.
- Gallery of Applications - What are the Open Referral implementations.
- Partner Stories - What are the stories behind the partner implementations.
- Types - The types of partners involved with platform.
- Application - The partners who are deploying a single web or mobile application.
- Integration - The partners who are deploying a single API and portal.
- Platform - The partners who are implementing many server and app integrations.
- Investor - Someone who is investing in Open Referral and / or specific implementations.
- Registration / Form - A registration form for partners to submit and join the program.
- Marketing Activities
- Blog Posts - Provide blog posts for partners to take advantage of, one time or recurring.
- Press Release - Provide press releases for new partners, and possibly recurring for other milestones.
- Discounts - Provide discounts on direct support for partners.
- Office Hours - Provide virtual open office hours just for partners.
- Training - Offer direct training opportunities that are designed just for partners.
- Advisors - Provide special advisors that are there to support partners.
- Quotes - Allow partners to provide quotes that can be published to relevant properties.
- Testimonials - Have partners provide testimonials that get published to relevant sites.
- Use of Logo - Allow partners to use the platform logo, or special partner platform logo.
- Blog - Have a blog that is dedicated to providing information for the partner program.
- Spotlight - Have a special section to spotlight on partners.
- Newsletter - Provide a dedicated partner newsletter.
Formalizing the partner program for Open Referral will help in organizing its operation, but also provide a public face to the program, lending credibility to the platform, as well as to its trusted partners. Not all partnerships need to be publicized, but it will lend some potential exposure to those that want it. Not every detail of Open Referral partnerships needs to be present, but operating in the open, being as transparent as possible, will help build trust in a potentially competitive environment.
There will be some HSDS API implementations, as well as potentially web or mobile applications, that are developed by Open Referral, with others developed and operated by partners. Whenever possible, being transparent about this will help build trust, and reduce speculation around the organizational mission. Formalizing the approach to platform partnerships will help set a positive tone for the community, and help Open Referral go from just a site, to a community, to a true platform.
I wanted to explore some of the services that will be needed in support of the Open Referral format specification, open source software development, as well as specific implementations. Not all of these services will be executed by Open Referral, with partners being leveraged at every turn, but it will also be important for Open Referral to develop internal capacity to support all areas, and as many types of implementations as possible. This internal capacity will be necessary to help move the specification forward in a meaningful way.
Here are some of the main areas I identified that would be needed to help support core API implementations, as well as some of the web and mobile application implementations that will use HSDS.
- Server Side
- Deployment - The deployment of existing or custom server implementations.
- Hosting - Hosting and maintenance of server implementations for customers.
- Operation - The overseeing of day to day operations for any single implementation.
- Data Services
- Acquisition - The coordination, access, and overall acquisition of data from existing systems.
- Normalization - The process of normalization of data as part of other data service.
- Deployment - The deployment of a database in support of implementation.
- Hosting - The hosting of database, APIs, and applications in the support of implementations.
- Backup - The backing up of data, and API, or application as part of operations.
- Migration - The migration of an existing implementation to another location.
- Development - The development of an application that uses an Open Referral API implementation.
- Hosting - The hosting of a web or mobile application that uses an Open Referral API implementation.
- Management - The management of an existing web or mobile application that uses an Open Referral API implementation.
- UI / UX - There will be the need to create graphics, user interface, and drive usability of end-user applications.
- Developer Portal
- Deployment - The demo portal can be used as base, and template for portal deployment services.
- Management - Handling the day to day operations of a developer portal.
- Registration - Registering for the domains used as part of implementations.
- Management - Running the day to day management of DNS for implementations.
- App Monitoring - The monitoring of apps that are deployed.
- API Monitoring - The monitoring of APIs that are deployed.
- API - Initial, and regular evaluation of the security of the API.
- Application - Initial, and regular evaluation of the security of applications.
In some of these areas I want to offer API Evangelist assistance as a partner, while in others I will be looking for partners to step up. I will also be looking at what cloud services, or open source software can assist in augmenting needs in these service areas. These are all areas that Open Referral will not be able to ignore, with many projects needing a variety of assistance in any number of these areas. Ideally Open Referral develops enough internal capacity to play a role in as many implementations as possible, even if it is just part of the platform storytelling, or support process.
What service providers will be used as part of operations? Throughout this project exploration I've mentioned the usage of Github, a potentially free or paid solution to multiple service areas. I've listed some of the other common service providers I recommend as part of my API research, and would be using to help deliver some of my contributions to the platform, and specific projects.
- Github - Github is used for managing code, content, and project sites.
- Amazon - AWS is used as part of database, hosting, and storage.
- CloudFlare - Used for DNS services, and DNS level security.
- Postman - Applied as part of onboarding, testing, and integrating with APIs.
- 3Scale - A service that can be used as part of the API management.
- API Science - A service that can be used as part of API monitoring.
- APIMATIC - A service that can be used to generate SDKs.
I recommend that Open Referral strike a balance between the number of services it uses to operate the platform, and what it suggests for partners and specific implementations. If possible, it would be nice to have one or more cloud services identified, as well as some potentially open source tooling that might be able to help deliver in each specific area.
Open Source Tooling
What tools will be used as part of operations? Complementing the services showcased above, let's explore some of the open source tooling that will be used as part of Open Referral platform operations. This should be a growing list, hopefully outweighing the number of cloud services listed above, providing low cost options to tackle much of what is needed to stand up, and operate an Open Referral, HSDS driven solution.
- Slate - A static, presentation friendly version of API documentation.
- Jekyll - An open source content management system used for project sites.
I have only gotten started here. There are no doubt other open tools already in use, as well as some we should be targeting. What are they, what will they be used for, and do their licensing and support reflect the Open Referral mission? Each of these solutions should be forked, and maintained alongside other organizationally developed or managed software.
HSDS is an open definition, built on the back of, and supporting, other existing open definition formats. Let's showcase this core of what Open Referral and HSDS are by providing an up to date list of all the open definition formats and standards in use.
- OpenAPI Spec - An open source, JSON API definition format for describing web APIs.
- APIBlueprint - An open source, Markdown API definition format for describing web APIs.
- MSON - An open source, markdown data schema format.
- JSON Schema - An open source, JSON data schema format.
- The Alliance of Information and Referral Systems XSD and 211 Taxonomy
- Schema.org - Civic Services Schema (at the W3C)
- The National Information Exchange Model - via the National Human Services Information Architecture - logic model here.
Open source software and open definitions are the core of Open Referral. The goal is to provide open formats, APIs, data, and tools that can be easily replicated by cash strapped municipalities, government agencies, and other organizations. However, software development and operation take money and resources, so there will be a monetization aspect to Open Referral, which will need to be explored and planned for.
I wanted to take what I've learned in the API sector, and put it towards the evolution of a monetization framework that can be applied across the Open Referral platform, down to the individual project level. Most monetization planning will be at the project level, with some of these considerations when it comes to thinking about generating revenue.
- Acquisition - What does it cost to get everything together for a project, from the first email to right before development starts.
- Development - What person hours, and other costs, are associated with the development of a project.
- Operations - What goes into the operation of APIs, portals, and other applications developed as part of integration.
- Direct Value
- Services - What revenue is generated as part of services.
- Grants - What grants have been received, and being applied to projects.
- Investment - What investments have been made for platform projects.
- Indirect Value
- Branding - What branding opportunities are opened up as part of operations.
- Partners - What partnerships have been established as part of operations.
- Traffic - What traffic is driven to the website, project sites, and other properties.
- Internal - What internal reporting is needed as part of platform monetization?
- Public - What reporting is needed to fulfill public needs?
- Partners - What partner reporting is needed as part of the program?
- Investment - What reporting is needed for investors?
- Grants - What grant reporting is required?
Most of these areas will be applied to each project, but they will no doubt need to be rolled up, reported, and understood across projects, as well as by the other areas listed above. Open Referral will not be a profit driven platform, but it will look to revenue generation not just to develop the open specification further, but also to push for the development of open tooling, and other resources.
Monetization strategies applied to Open Referral will heavily drive the plans for API access that are applied to each individual implementation. While not everything will be standard across HSDS supporting implementations, there should be a base set of plans for how partners can operate, and generate their own revenue to support operations.
Platform API Plans
What are the details of API engagement plans offered as part of operations? I wanted to explore the many ways that leading API platforms open up access to their resources, and hand pick the ones that made sense for a minimum set of plans that could be inherited by default, within each implementation. Of course, each potential implementation might be different, but these are some of the essential platform plan considerations.
- Public - What are the details of public access.
- Commercial - At what point does access become commercial.
- Sponsor - How much access is sponsored by partners?
- Partner - Which plans are only available to partners?
- Education - Are there educational and research access?
- Time Frames
- Seconds - Resources are restricted by the second.
- Daily - Resources are restricted by the 24 hour period.
- Monthly - Resource access is reported on a monthly timeframe.
- Calls - Individual API calls are measured.
- Support - Support time is measured.
- Writes - The ability to write data to platform is measured.
- Country - In country deployment opportunities are available.
- On-Premise - On-premise options are available for deployment.
- Regions - The deployment in predefined regions are available.
- Range - API access limitations are available in multiple ranges.
- Minutes - Support access is limited in minutes.
- Hours - Support access is limited in hours.
- Endpoints - There are access limitations applied to specific API paths.
- Verbs - There are access limitations applied to the method / verb level.
While it is ideal that HSDS implementations provide public access to the vital resources being made available, it is not a requirement, and some implementations might severely lock down the public access elements of the platform. Regardless, all of the items listed should be considered when crafting one to five separate API access plans. The plans should cover hard infrastructure costs like compute, storage, and bandwidth, while also providing other commercialization opportunities that support revenue generation as well.
These are mostly the resources that currently exist on the public website, but I wanted to also make sure and provide other details about the organization, and the team behind the efforts. These are a few other resources that shouldn't be forgotten.
- FAQ - Providing an organized list of the frequently asked questions for the platform.
- History - Provide the history necessary to understand the background of the project.
- Strategic - What are the strategic objectives of the organization and specification.
- Technical - What are the technical details of the organization and specification.
- Organization - Description of the organization.
- Team - Description of the team involved.
- Specification - Description of the HSDS.
I can keep adding to this list, but I think this represents a pretty significant v2 presence for Open Referral, as well as the Human Services Data Specification (HSDS) format. This isn't just a suggested proposal. I needed to think about what was needed, and what is next, to help support the projects on the table, and the proposals that are in the works for specific implementations. I couldn't think about any single project without exploring the big picture.
Now I'm going to share this with Greg Bloom, the passionate champion behind Open Referral and HSDS. I just needed to make sure everything was in my head, in support of our discussion in person tomorrow. We'll be looking to move the needle forward on this vision, in conjunction with the implementations on the table. Exploring the big picture on my blog is how I put my experience on the table, working through all of its moving parts, and making sure I've covered all the ground I need to discuss.
What Does The Road Map Look Like?
Greg and crew are in charge of the road map. I just need to get more intimate with the specification. I have already created a v1 draft, scraped from the Slate documentation for the existing Ohana API implementation, using the OpenAPI Spec. I also have PDF documentation from an Open Referral partner to convert to a machine readable OpenAPI Spec. The process will help me further build awareness around the specification itself. This post has helped me see the 100K view; crafting the OpenAPI Spec will help me dive deep down into the weeds of how to deliver a human services API using the HSDS standard.
A Model For Human Services API And Hopefully Other Public API Services
I'm pretty stoked with the potential for working on Open Referral, and honored Greg has invited me to participate. This is just a first draft, tailored for what I would like to see considered for Open Referral / HSDS API, and for a couple of immediate implementations. However the model is something I will keep evolving alongside this project, as well as a more generic blueprint for how public service APIs could possibly be implemented.
There are several other API implementations that have come across my desk to which I've felt a model like this should be applied. I was thinking about applying it to the FAFSA API, to help develop a student aid API community. I also thought it could be applied around the deployment of the RIDB API, in support of our national park system. In both of these environments a centralized, common, open API definition, with supporting schema and dictionaries, and a healthy selection of open source server and client side web or mobile app implementations, would have gone a long way.
Anyways, I have what I need in my head so that I can talk with Greg, and coherently discuss what could be next.
As part of a renewed focus on the API discovery definition format APIs.json, I wanted to revisit the proposed machine readable API discovery specification, and see what is going on. First, what is APIs.json? It is a machine readable JSON specification that anyone can use to define their API operations. APIs.json does not describe your APIs like OpenAPI Spec and API Blueprint do; it describes your surrounding API operations, with entries that can reference your OpenAPI Spec, API Blueprint, or any other format that you desire.
APIs.json Is An Index For API Operations
APIs.json provides a machine readable way for API providers to put the work into describing their API operations, similar to how website providers describe their websites using sitemap.xml. A number of API providers are already describing their APIs using APIs.json.
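To make this concrete, here is a sketch of what a minimal APIs.json index might look like, built up in Python. The field values are hypothetical, and the current specification should be consulted for the exact set of required properties:

```python
import json

# A hypothetical, minimal APIs.json index for a single API.
# All values are illustrative; see the APIs.json specification for
# the authoritative list of fields.
apis_json = {
    "name": "Example Human Services",          # name of the API provider
    "description": "Example HSDS API index.",  # what this index covers
    "url": "http://example.com/apis.json",     # authoritative location of this file
    "apis": [
        {
            "name": "Human Services API",
            "baseURL": "http://api.example.com/",
            "properties": [
                # Each property links to a human or machine readable resource.
                {"type": "Documentation", "url": "http://example.com/docs"},
            ],
        }
    ],
}

print(json.dumps(apis_json, indent=2))
```

Like sitemap.xml, the file would be published in the root of the provider's domain, where search engines and other tooling know to look for it.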
APIs.json Indexes Can Be Created By 3rd Parties
One important thing to add is that these APIs.json files can also be crafted and published by external parties. An example of this is the Trade.gov APIs. I originally created that APIs.json file, and coordinated with them to eventually get it published under their own domain, making it an authoritative APIs.json file. Many APIs.json files will be born outside of the API operations they describe, something you can see in my API Stack project:
- The API Stack - Provides almost 1000 APIs.json files that describe the API operations of many leading public API platforms. There are also around 300 OpenAPI specifications for some of the platforms described.
APIs.json Can Be Used To Describe API Collections
Beyond describing a single API, within a single domain, APIs.json can also be used to describe entire collections of APIs, providing a machine readable way to organize, and share, valuable collections of API resources. Here are a few examples of projects that are producing APIs.json driven collections:
- Defining APIs that you depend on for organizational operation.
- Defining a specific category of API operations, using the format.
- SMS - http://sms.stack.network/
- MMS - http://mms.stack.network/
- Email - http://email.stack.network/
- News - http://news.stack.network/
APIs.json Can Be Used To Describe Collections of Collections
Taking things another rung up the chain, APIs.json can also provide a collection of collections, something I do with my own APIs. Each Github organization in my network has a master APIs.json, providing include links to all the other APIs.json files within the organization. In this scenario I have over 30 other APIs.json files indexed, which can all operate independently of each other, but can also be considered a collection of API collections.
- Master - A master collection of API collections I maintain as part of the API Evangelist network operations.
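A sketch of how that master index might reference its child collections, with hypothetical names and URLs (the include mechanism is per my reading of the spec, so check the specification for the exact property names):

```python
# A hypothetical master APIs.json acting as a collection of collections,
# pointing at other APIs.json files via its include list.
master = {
    "name": "Master API Collection",
    "url": "http://example.com/apis.json",
    "apis": [],  # no APIs defined directly at this level
    "include": [
        {"name": "SMS Collection", "url": "http://sms.example.com/apis.json"},
        {"name": "Email Collection", "url": "http://email.example.com/apis.json"},
    ],
}

# A consumer would fetch each included index, and recurse into any further
# includes, flattening the collections into a single searchable list.
included_urls = [entry["url"] for entry in master["include"]]
print(included_urls)
```

Because each included file is a complete APIs.json index in its own right, every collection can operate independently while still rolling up into the master view.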
The First Open Source Tooling For APIs.json
Up until now, this post has been all about APIs.json, when in reality the format is useless without there being any tooling built on top of the specification, bringing value to the table. This is why the 3Scale team got to work building an open source APIs.json driven search engine:
- APIs.io as an open source tool dedicated to APIs.json
- APIs.io as a public API search engine, with APIs.json as index.
- APIs.io as a private API search engine, with APIs.json as index.
APIs.json Driving Other Open Tooling
APIs.io is just the beginning. It won't be enough to convince all API providers that they should be producing an APIs.json index of their site operations just for the API discovery boost. We are going to need APIs.json driven tooling that will service every other stop along the life cycle, including:
- HTTP Client / Hub / Workbenches
APIs.json Integrated Into Existing Platforms
What areas would you like to see served? Personally, I would like to have the ability to load / unload my APIs.json collections into any service that I use, allowing me to organize the internal, public, and 3rd party APIs I depend on within any platform out there that is servicing the API space. Here are a handful of those types of integrations that are already happening:
- WarewolfESB - ESB integration and API discovery.
- SwaggerHub - Public and private API hub discovery.
- API Management - In Progress w/ 3Scale...
- API Monitoring - In Progress with API Science...
- API Change Log - In Progress with API ChangeLog...
- SmartBear - API discovery for monitoring, testing, virtualization, and security.
- API Evangelist - API analyst operations.
- Kin Lane - API factory operations (not organic)
- Adopta.Agency - Government open data publishing.
APIs.json Linking To The Human Aspects Of API Operations
APIs.json is just the scaffolding on which to hang links to essential aspects of your operations; it doesn't care what you link to. You can start by referencing essential links for your API operations like:
- Signup - How to signup for a service.
- Support - Where to get support.
- Terms of Service - Where are the terms of service.
- Pricing - Where to find the pricing for a service.
APIs.json Linking to Machine Readable Aspects of API Operations
These do not have to be machine readable links; they can reference important things the humans will need first. However, ultimately the goal is to make as much of the APIs.json index as machine readable as possible, using a variety of existing API definition formats, available for a variety of purposes.
- OpenAPI Spec, for API description.
- API Blueprint, for API description.
- API Common, for API licensing.
- Postman, for run-time.
Defining New, Machine Readable Property Elements For APIs.json
While the APIs.json spec will evolve, something I talk about below, its real strength lies in its ability to incentivize the development of entirely new, machine readable API definitions, bringing even more value to the API discovery process. Here are a few of the additional specs being crafted independent of, but inspired by APIs.json:
- API Plans, for pricing, plans & rate limits.
- API Monitoring, for monitoring & testing.
- API Changelog, for operational monitoring.
- API SDK, for SDK reference.
- API Conversations, for the stream around API operations.
Roadmap for Version 0.16 of APIs.json
That is the 100K view of what APIs.json is now, and the short term plan for the future. Most of the change within the universe APIs.json is mapping will occur at the individual API level, and within the machine readable specs that describe them, like OpenAPI Spec, API Blueprint, and Postman. Secondarily, there will be additional machine readable API types being defined and added into the spec.
Even with this reality, we do have a handful of changes planned for the 0.16 version of APIs.json:
- commons - Establish a top level collection of common property elements that apply to ALL APIs being referenced in an APIs.json.
- country - Adding a top level country reference using ISO 3166.
- New Property Elements - Suggesting a handful of new property elements to reference common API operation building blocks.
I doubt we will see many new additions like commons and country. In the future most of the structural changes to APIs.json will be derived from first class property elements (ie. adding documentation or Github), making this the proving ground for defining what are truly the most important aspects of API operations, and what should be machine readable vs human readable.
The Hard Work That Lies Ahead for APIs.json
That concludes defining what APIs.json is, and what is next for it. Now we really have to get to work, doing the heavy lifting around:
- Getting more API providers to describe their API operations using APIs.json, and publish in the root of the domain for their API ecosystem.
- Encourage more API evangelists, brokers & analysts to describe their collections using APIs.json, building more meaningful indexes and directories of high value APIs.
- Encourage platforms to build APIs.json into their operations, as a storage and organization schema, but also as import / export format.
- Incentivize the development of more meaningful tooling that employs APIs.json, and uses it to better serve the API life cycle.
- Continue to add new API property elements, making sure as many of them as possible evolve to be machine readable, as well as first class citizens in the APIs.json specification.
You can stay involved with what we are up to via the APIs.json website, and the APIs.json Github repository. You can also stay in tune with what is going on with APIs.io via its website, and its Github repository. If you are doing something with APIs.json, ranging from using it as an index for your API operations, to platform integrations, please let me know. Also, if you envision some interesting tooling you'd like to see happen, make sure and submit a Github issue letting us know.
While we still have huge amounts of work to do when it comes to delivering meaningful API discovery solutions that the industry can put to work, I am pretty stoked with what we have managed to do over the last two years of work on the APIs.json specification, and supporting tooling--momentum that I feel is picking up in 2016.
Each Adopta.Agency project runs as a Github repository, with the main project site running using Github Pages. There are many reasons I do this, but one of the primary ones is that it provides me with most of what I need to provide support for the project.
Jekyll running on Github Pages gives me the ability to have a blog, and manage the pages for the project, which is central to support. Next, I use Github Issues for everything else. If anyone needs to ask a question, make a contribution, or report a bug, they can do so using Github Issues for the repository.
I even drive the project road map and change log using Github Issues. If I tag something as roadmap, it shows up on the road map, and if I tag something as changelog, and it is a closed issue, it will show up on the change log--this is the feedback loop that will help me move the clinical trials API forward.
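The labeling logic described above can be sketched as a small routine. The issue records below mirror the general shape returned by the GitHub Issues API, but the issue titles and this exact filtering code are hypothetical examples, not the actual Jekyll plumbing behind the project:

```python
# Sketch of the label-driven road map / change log logic described above.
# Issue records mirror the shape of GitHub Issues API responses; titles
# and labels here are hypothetical examples.

def build_roadmap_and_changelog(issues):
    """Open issues tagged 'roadmap' feed the road map; closed issues
    tagged 'changelog' feed the change log."""
    roadmap, changelog = [], []
    for issue in issues:
        labels = {label["name"] for label in issue.get("labels", [])}
        if "roadmap" in labels and issue["state"] == "open":
            roadmap.append(issue["title"])
        elif "changelog" in labels and issue["state"] == "closed":
            changelog.append(issue["title"])
    return roadmap, changelog

issues = [
    {"title": "Add JSON download", "state": "open",
     "labels": [{"name": "roadmap"}]},
    {"title": "Fixed CSV encoding", "state": "closed",
     "labels": [{"name": "changelog"}]},
]
roadmap, changelog = build_roadmap_and_changelog(issues)
print(roadmap)    # ['Add JSON download']
print(changelog)  # ['Fixed CSV encoding']
```

The nice part of this approach is that closing a roadmap issue and re-tagging it as changelog moves the item from one list to the other, with no extra bookkeeping.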
The next step for this project is to actually set up the clinical trials data as an official Adopta.Agency project, by forking the Adopta Blueprint, and customizing it specifically for managing the work around this clinical trials data set. I created a new repository, replicated the blueprint, and got to work changing everything in the central YAML file.
I can pretty much customize everything about the project, directly in the _config.yml file. I update the title, project description, as well as the description for data and API sections. I turn off some of the features that I do not think we are ready for, like the showcase. My primary objective is to get the clinical trial data available as CSV and JSON, get a version one of the API up and running, with a blog to tell the story, and have a road map and change log available to help keep track of the project--I will worry about the rest down the road.
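To give a sense of the approach, here is a hypothetical sketch of the kind of settings involved. The actual key names in the Adopta.Agency blueprint's _config.yml may well differ from what I show here:

```yaml
# Hypothetical sketch of _config.yml settings for the project;
# actual blueprint key names may differ.
title: Clinical Trials API
description: Adopted clinical trials data, made available as CSV, JSON, and a simple API.
data:
  description: Download the clinical trials data as CSV or JSON.
api:
  description: A read-only API for searching clinical trials.
features:
  blog: true
  roadmap: true
  changelog: true
  showcase: false
```

Because Jekyll exposes these values site-wide, changing a single YAML file is enough to re-brand the entire forked project.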
For me, the blog is one of the most important tools in this toolbox. It is these blog posts that help me think through the work I'm doing, communicate with other people who are interested in learning about the project, while also generating vital SEO that will help drive other interest in the work. The goal is to record every little bit of work I put into the project here, and encourage others to come and engage with the project via its Github repository.
One of the benefits of doing an API is that you can take advantage of the potential for a community feedback loop, driven by internal groups, external partners, and even in some cases the public. Under my API management research, you can find a number of building blocks I recommend for helping power your feedback loop, but sometimes I like to just showcase examples of this in the wild, to show how it all can work.
I was reading the Letter from our co-founder: 2016 Product vision, from Electronic Health Record (EHR) provider Practice Fusion, and I thought the heart of the blog post represented a well functioning feedback loop. As Practice Fusion looked back over 2015, they noted the following activity:
- 798 product ideas submitted with over 3,000 comments.
- 64 community ideas already delivered as product features.
- 45 new ideas currently being worked on.
They acknowledge that this feedback "powers everything we do". They continue by listing some of the major customer requests that were fulfilled in 2015, and close the letter with an "eye towards the future". It is a simple, but nice example of how a platform's community can drive the platform road map. Sure, a lot of this comes from the SaaS side of the Practice Fusion operations, but it works in sync with the Practice Fusion developer community as well.
The lesson from this one, to include in my overall research, is to always have a way to collect feedback from the community, tag ideas for discussion as potential additions to the road map, and carefully track which ideas end up on the road map, and which ones get delivered. This is something that is pretty easy to do with Github Issue Management, which I use for my own personal projects to drive both my road maps, and resulting change logs.
This post also pushed me to begin tagging these types of stories, organizing them into a simple API playbook, full of API platform how-tos, like this product vision letter from co-founder of Practice Fusion.
This is a review of the communication API platform CallFire, crafting a snapshot of platform operations, from an external viewpoint, providing the CallFire platform team with a fresh take on their API from the outside-in. The criteria applied in this review is gathered from looking at the API operations across thousands of API providers, and aggregating best practices, into a single, distilled review process.
This review has been commissioned by CallFire, but as anyone who's paid me for a review knows, money doesn't equal a favorable review—I tell it like I see it. My objective is to help CallFire see their platform through a different lens, as developers might see their platform. It can be hard to see the forest for the trees, and I find the API review is a great way to help API providers see the bigger picture.
I prefer to share my API reviews in a narrative format, walking through each of the important aspects of API management, telling the story of what I see, and what I don't see. Here is the story of what I found while reviewing the communication API platform for CallFire.
CallFire makes for an easier review than some API operations, because the API is the product. As soon as you land on the homepage of the website, you begin your API journey as a potential API consumer. The first thing you read is "Over 2 Billion Messages Delivered”, so you immediately understand what CallFire does, and the next thing that grabs your eye is “Signup For Free”—way to not miss a beat, CallFire.
Next you see five simple icons, with simple text, breaking what CallFire does down: Text Messaging, Call Tracking, Video Broadcast, Cloud Call Center, IVR. Within the first five seconds you fully understand what is being offered, and given the opportunity to sign up. If that is not enough, you are also told the reasons why: Engage Your Customers, Save Valuable Time, Increase Revenue.
After you look at thousands of APIs, nothing is more frustrating than to have to figure out what an API does. CallFire gives me what I need, in five seconds or less, without clicking or scrolling. This is the way all APIs should be, if not on the homepage of the website, then on the landing page of the API developer portal. The main page of the CallFire website is well designed, and organized in a simple, and robust way, giving you one-click access to everything you need to get going with the platform--no other feedback required.
This is the part of the review where I dive into the actual design of the API, and provide some feedback on the technical endpoints of the APIs themselves. CallFire is unique because it has a REST and a SOAP version of the API available, which I think is important in today's business climate, where APIs are targeting open developers, as well as those within the enterprise.
The CallFire API is very robust, with a wide range of endpoints / methods for the most basic text and call features, all the way up to campaign, subscription, and contact management. You can tell the system is well thought out, providing a full suite of communication resources for all types of developers.
Once you dig into the REST API you begin to see quite a bit of SOAP residue, and while the API has a well thought out list of endpoints, many elements of the parameters, requests, and responses feel SOAPy, including the XML responses. There is also a lack of consistent response codes, and a defined data model, giving the REST API an unfinished feeling.
Overall I give the API a solid B, in that it is a very robust stack, but I'd say it needs some hard use and integrations before the rough edges are hammered off, parameters become more intuitive, and request and response structures normalize. Much of this just comes with usage, and requires getting closer to real world use cases and end users, before it becomes more of an experience based design vs. the resource based design it currently is.
I could easily go through the entire Swagger definition for CallFire and make recommendations on naming conventions, and help craft the resource definitions for the underlying data models, but this is best worked out with the community, iterating, and communicating with developers, and learning more about truly what they need. Think of it as kiln firing of the API, through developer execution, and robust platform feedback loops.
On-boarding with the CallFire API was frictionless. I went one click from the home page, authenticated with my Google Account, and immediately I was dropped into my account dashboard, with a helpful intro screen showing me where things are. I easily found the area in my account to add an application, and get my API keys, then stumbled into the overview of how to activate your account as well—account management was intuitive.
The intuitive and informative CallFire home page made the API easy to find, and with frictionless account signup, and standard API app management, I was ready to make my first call on the API within 10 minutes. The only thing I would consider adding as part of the process is an option for signup and login using my Github credentials, in addition to Facebook, and Google.
On-boarding with an API is often the most frustrating part of API integration, and it wasn't something I worried about at all with CallFire. The process was intuitive, smooth, and didn't leave me trying to understand what the API does, and how I am supposed to make it work. Solid A on the on-boarding process for the CallFire API.
Documentation is one of the most critical aspects of API integration, making or breaking many integration efforts by developers. CallFire has double duty, in that it needs to provide documentation for both the REST and SOAP versions. Something CallFire manages to deliver with no problem, providing clean, easy to follow documentation for both APIs they offer.
The SOAP API document provides a simple breakdown of operations and methods, with easy to follow descriptions for everything. Ultimately the SDKs do most of the heavy lifting here, but the SOAP docs provide a nice overview of the CallFire platform.
The REST API for CallFire is defined using the machine readable API definition format Swagger, and uses Swagger UI to generate documentation, making learning about the API more interactive. Swagger provides a machine readable overview of the CallFire API, and is an approach to delivering API documentation that keeps pace with modern approaches.
I do not have much feedback for the documentation side of CallFire. I'd like to see more information about the underlying data model described in the Swagger definition, as well as more detail about the response codes, but I give the platform documentation a solid B for being simple, clean, and complete--the API just needs some hardening, and the documentation will improve.
On-boarding with the CallFire API is frictionless, and adding an app, and finding your API keys are intuitive enough. The platform also provides a nice overview on how to enable the API on your CallFire account, but ultimately the topic of authentication is neglected.
Authentication for SOAP interactions, and REST for that matter, is abstracted away by each of the SDKs. However, one of the elements of RESTful APIs is that authentication should be clearly defined as part of the documentation. I suggest adding a page, or a section on an existing page, that is dedicated to authentication, explaining the BasicAuth used to secure the CallFire API. For any experienced API consumer it isn't difficult to navigate, but the app manager, with its login and password, gives the appearance that authentication may be an app key passed via a header or parameter, not BasicAuth--a dedicated authentication overview page would help clear this up.
As part of authentication review I do not usually advocate for a specific authentication approach when the choices are BasicAuth, or using the app id and keys in the header or parameters. The best option is to pick one, be consistent, and explain it clearly, on a page that stands out. Overall the authentication for CallFire is intuitive, it just needs a little bit of information to make things 100% clear when you first find yourself making that first API call.
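For developers hitting the confusion described above, here is a minimal sketch of what BasicAuth boils down to, using only the Python standard library. The login, password, and API URL are placeholders, not actual CallFire credentials or endpoints:

```python
# A minimal sketch of HTTP Basic Auth; the credentials and URL below
# are placeholders, not real CallFire values.
import base64
import urllib.request

def basic_auth_header(login, password):
    """Build the value of the Authorization header for HTTP Basic Auth:
    'Basic ' followed by the base64 encoding of 'login:password'."""
    token = base64.b64encode(f"{login}:{password}".encode("utf-8"))
    return "Basic " + token.decode("ascii")

request = urllib.request.Request("https://api.example.com/v1/calls")
request.add_header("Authorization", basic_auth_header("app-login", "app-password"))
# urllib.request.urlopen(request) would then make the authenticated call.
```

This is the detail a dedicated authentication page should spell out, so developers know the app login and password go in a BasicAuth header, not in a query parameter.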
UPDATE: Since I wrote this review, and the time of publishing, CallFire has updated their portal to include a well formed authentication overview for the platform.
After documentation, code samples, libraries, and SDKs are key to a painless API integration. For the CallFire API, there are only two SDKs available currently, for the PHP and .NET platforms. It is common for platforms that are just getting going to only have a handful of SDKs in specific languages, which is forgivable, but it is also a sign of platform immaturity (aka lots of work still to be done).
Another aspect of SDK design for CallFire, that I'd like to bring up, is the cohabitation of REST and SOAP in a single SDK. I'm not sure this type of cross pollination is ideal for all integrations. Maybe it is just my architectural style, but I like seeing each SDK have as small a footprint as possible, and meet the needs of developers without any extraneous bloat.
Moving beyond SDKs, into what I call Platform Development Kits (PDK), CallFire does well in providing two distinct platform plugins for WordPress and Drupal. I recommend bringing these PDKs to the surface, and showcasing them in a full SDK and PDK showcase page—showing what is possible. Maybe even consider the next step: what is the 3rd PDK that could be developed? SalesForce? Heroku?
The usage of Github by CallFire is another important signal, showing the platform is progressive, and something that developers can depend on. I recommend further bringing Github into the site, linking to accounts, providing direct links to SDK and PDK repositories from an official page, and add Github authentication for developers to be able to create and manage their accounts. Github isn't just about code, it is a potentially important social layer to the CallFire API ecosystem.
There really is no evidence of any mobile SDKs, or information for mobile developers available on the CallFire platform. It is common to find entire sections dedicated to mobile developers, or at least links to mobile platform specific code libraries. I recommend establishing a mobile focused section of the platform, and invest the resources necessary to help developers build iOS, Android, and Windows mobile applications using CallFire.
CallFire is doing well on the support front, providing building blocks for both direct, and self-service support. I like to see a mix of support services that developers can find on their own, getting the help they need 24/7, but they also need to be able to get direct support when they get stuck.
When it comes to direct support, CallFire is rocking it, with a support email, live chat, phone number, contact form, and ticketing system tied to your account messaging area. The only additional thing I could recommend CallFire offer is paid support plans, allowing developers to pay for one-on-one support via chat, online hangout, or other means.
With self-service support, there is the CallFire FAQ, which I'd call more a knowledge base than a simple FAQ, providing a wealth of knowledge about the platform. The only common element I see missing is more of a community element, with a forum, or usage of Stack Overflow to engage with the wider developer community. The current FAQ is very robust, and with the integrated ticketing system, the potential is great, but it is all missing that community piece.
Overall, CallFire support is as robust as you'd expect from any API platform. When you combine this with the social media presence the platform has, which I'll cover as part of the communication strategy, the platform has all of its bases covered. A+ on support effort.
When it comes to my review criteria for communications, other than a newsletter, CallFire nailed every one of them. The platform has all of the expected social media platforms, has an active blog with RSS feed, and is very accessible with email, phone, and chat. All of this sends the right signals to the community, and potential API consumers, that someone is home. I have nothing to contribute when it comes to communications; as long as all channels are kept active, CallFire is doing everything it can in my mind.
As platform providers, we are asking developers to depend on us, and integrate our resources into their applications, and businesses. That is asking a lot, and we need to provide as much information as possible about what the future holds, to help build trust with developers. There are a handful of proven ways of doing this, established by leading API platforms.
- Roadmap - API roadmaps are usually a simple, bulleted list, derived from the API's own internal roadmap, showing what the future holds for the platform. Transparency around an API's roadmap is a tough balance, since you don't want to give away too much, alerting your competitors, but your developer ecosystem needs to know what is next.
- Status Dashboard - Status dashboards are a common way for API platforms to communicate the availability of an API, and to show the track record of a platform, helping developers understand the reliability of an API they are about to integrate with. There are several simple services that help API providers do this, without investment in new tools and systems.
- Change Log - Knowing the past is a big part of understanding what is in store for the future. A change log should work in sync with the API roadmap building block, but provide much more detailed information about changes that have occurred with an API. Developers will not always pay attention to announced updates, but can use a change log as a guide for deciding what changes they need to make in their applications, and how they will use an API.
Sharing the change history of a platform, a roadmap for the future, and the current status of API operations goes a long way in helping build trust with developers. Transparency in the development of any platform is essential in helping developers feel comfortable that a platform will be around to support their needs, and is worthy of their time.
When reviewing APIs, the overall business model is usually one of the most incomplete aspects of operations, in my experience. This is ok, as many platforms are still figuring this out; however, this is not the case with CallFire. The business model for the platform isn't just well defined, it provides me with an example to use when helping other API providers visualize what is possible.
The pricing page for CallFire is clean, well thought out, and provides sensible tiers of operation, with clear units of measurement, letting me know what I get for each level. I can easily upgrade the access tier directly from my account settings, and I can get volume pricing if needed. This is how APIs should work, allowing me to easily calculate what I'll need, and figure out which tier I will be operating within, complete with a self-service option for scaling as I need, paying for what I use, as I go.
The billing management and credit system for CallFire is superior to most of even the most thought out API billing and pricing models. It is clean, well thought out, and makes sense from a user perspective—which is the most important aspect. I'll be using the CallFire credit system as a reference when I talk about how platforms should build tooling that supports the underlying API business model.
I can't articulate enough how well done the business model, pricing, supporting billing, and the other business elements of the CallFire API are.
When it comes to available support resources, it is another area CallFire does very well. The platform has heavily invested in case studies, videos, webinars, and a tour of the platform. They even have a communications and marketing glossary developers can use to get up to speed. CallFire does a good job of providing valuable resources that help developers quickly understand all aspects of platform operations.
A couple of areas where I could suggest improvement are more industry level white papers, and once the evangelism side of things kicks in, maybe consider posting slide decks from events CallFire presents at, as well as a calendar of interesting events. These things will happen I'm sure once an API evangelism strategy is kicked into full gear, but for now just keep doing more of the same--providing lots of rich resources for devs.
Research & Development
I'd file R&D in the same category as mobile: non-existent. API ecosystems are essentially external R&D labs for companies, and general operations are about exploring ideas of what can be built with an API, but it helps to have some elements available to stimulate overall R&D via the platform.
Some of these elements are:
- Idea Showcase - A place the community can share ideas of what could be built with the CallFire platform.
- Labs Environment - A workbench showing what CallFire is working on when it comes to their own integration.
- Opportunities - Available opportunities to build things like SDKs, PDKs, or specific projects.
These are just three things that help stimulate the innovation around an API. Sometimes developers just need something to spark the imagination, or possibly see an existing labs project to help them see something in their own work. These rich R&D environments can provide a great opportunity to help meet the needs of CallFire, and its partners.
A couple of items I'd recommend also considering, based upon what I've seen on other platforms:
- Code License - The PHP SDK has an MIT license, but the .NET SDK didn't have anything. A centralized code licensing page could help as well.
- API License - A license for the API itself, applied to the REST API interface that is defined by Swagger, using the API Commons format.
- Service Level Agreement (SLA) - Provide a service level agreement for API consumers to take advantage of, and understand service level commitments.
- Branding - There are no branding or style guidelines with support resources like logos, etc—missed opportunity for spreading word, and steering developers in the right direction.
Just a couple of things to think about. All of these would go a long way in building trust with developers, and the branding thing is a huge missed opportunity in my opinion. When you bundle these with the TOS, privacy, and compliance information already provided by the platform, it would round off the legal department of the CallFire API nicely.
Embeddable tooling is another area that is non-existent for the CallFire API. There are no embeddable tools like widgets, buttons, etc that allow the average end-user, and developer to put the API to use in web pages, and applications. I'm not sure what an embeddable suite of tools would look like for CallFire, that would need to be a separate brainstorming process.
When it comes to communication platforms, especially ones involving media, and deep social interactions, embeddable tools are a proven way to grow a platform, expand the network effect, and potentially bring in new developers. I recommend including an embeddable section on the site, with a handful of embeddable tooling to complement the SDK and PDK resources already available.
One area I consider when looking through API operations is the environment itself. By default many APIs are live, ready for production use, but increasingly platforms are employing alternate environments for development, QA, and potentially variant product environments.
- Sandbox - With the sensitive information available via many APIs, providing developers a sandbox environment to develop and test their code might be a wise idea. Sandbox environments will increase the overall cost of an API deployment, but can reduce headaches for developers and can significantly reduce support overhead. Consider the value of a sandbox when planning an API.
- Production - When planning an API, consider whether all deployments need to have access to live data in real-time, or whether developers should be required to request separate production access for their API applications. In line with the sandbox building block, a separate API production environment can make for a much healthier API ecosystem.
- Simulator - Providing an environment where developers can find existing profiles, templates, or other collections of data, as well as sequences for simulating a particular experience via an API platform. While this is emerging as a critical building block for Internet of Things APIs, it is something other API providers have been doing to help onboard new users.
- Templates - Predefined templates of operation. When a new environment is set up, either sandbox, production, or simulator, it can be pre-populated with data, and other configuration, making it more useful to developers. These templates can be used throughout the API lifecycle, from development and QA, all the way to simulation.
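To make the multi-environment idea above concrete, here is a small sketch of how a client might resolve endpoints against separate sandbox, production, and simulator environments. The host names and paths are hypothetical, not actual CallFire URLs:

```python
# Sketch of environment-aware endpoint resolution; all host names
# and paths below are hypothetical examples.

ENVIRONMENTS = {
    "sandbox": "https://sandbox.api.example.com/v1",
    "production": "https://api.example.com/v1",
    "simulator": "https://simulator.api.example.com/v1",
}

def endpoint(environment, path):
    """Resolve a relative path against the chosen environment's base URL."""
    base = ENVIRONMENTS[environment]
    return base + "/" + path.lstrip("/")

print(endpoint("sandbox", "/texts"))
# https://sandbox.api.example.com/v1/texts
```

Keeping the environment choice to a single setting like this is what lets developers build against a sandbox, then flip to production without touching the rest of their integration code.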
This approach to delivering an environment for the CallFire API is not essential, but I could see it providing some interesting scenarios for communication campaigns, and the deployment of messaging infrastructure in a containerized, Internet of Things rich environment. Deployment of CallFire communication infrastructure should be as flexible as possible to support the next generation of Internet enabled communication, both in the cloud and on-premise.
An often overlooked aspect of API operations is the tools provided to API consumers. CallFire is in a fortunate position as the API is core to their product, and the API integration is an extension of a primary CallFire user account. The account area for CallFire is well done, clean, and gives users, and those who choose to be API consumers, quite a bit of control over their communication infrastructure.
CallFire nailed almost every API account area I like to see in any API platform:
- Account Dashboard - The dashboard for the CallFire account is well done, and informative.
- Account Settings - CallFire provides a high level of control over account settings.
- Reset Password - Resetting passwords is important, and something I like to highlight separately.
- Applications - The app management for CallFire is on par with the rest of the industry.
- Service Tiers - The ability to change service tier, and scale is pretty significant.
- Messaging - An important part of the communication and support strategy of the platform.
- Billing - Essential to the economy portion of platform operations, well executed.
There are a couple of areas I'd like to see, to round off developer account operations:
- Github Authentication - It would fit nicely with Facebook, and Google Auth--I prefer authenticating for APIs with my Github.
- Delete Account - Maybe it is in there, but I couldn't find it. The ability to delete an account is important in my book.
- Usage Logs & Analytics - I'd like to see application specific analytics on the dashboard, showing usage per app.
- Account API - Allow API access to all account operations, allowing access to account settings, usage limits, billing, messaging and other areas.
The CallFire account management for users and developers is much more robust than I see in many of the APIs I review. Like I said before, the monetization portion is something to be showcased, and all the most important aspects of account management for API operations are present. It wouldn't take much to round things off: a couple more features, some more analytics, and an account management API would really take things to the next level.
I always enjoy when I find consistent design, and function, across API operations. This is what I found with CallFire. The API isn't an afterthought like on other platforms, it is their product, and the site design, messaging, and content are consistent across the platform.
The API design is consistent, and the supporting documentation is as well. The only thing I'd add is that design patterns across the SOAP and REST APIs should be less consistent with each other, and stay true to their own design constraints. The details of the REST API could be tightened, to be more consistent in how parameters are formed, and how response formats and error codes are presented.
Usually when reviewing APIs, I look for fractures between API operations, like clunky UI between website sections, or incomplete documentation, often created by disparate teams. This doesn't exist with CallFire, and while there are many details that could be cleaned up, the consistency is all there.
This is a term thrown around a lot in the space, and very seldom do sites live up to it. There are many things that contribute to whether or not an API is truly open. CallFire delivers on all of the important areas, making open a word I'd apply to CallFire.
One of the things I think heavily contributes to the openness of the CallFire platform is the business model. The monetization strategy is well formed, with pricing and service tiers well defined. You know what things cost, and how to get what you need. This type of approach eliminates the need for other extraneous rate limits, or restrictions—this type of balance is important to truly achieving openness.
After this review I'd call CallFire an open API, but only time will tell if the platform is also stable, support channels are truly supportive, and other aspects of open that only CallFire can deliver on. For right now I consider them open for business, and open for developers, but ultimately whether or not CallFire is willing to share this review, will put the exclamation on the platform openness definition, won't it! ;-)
The usual footprint you'd see when an API platform has an active evangelism program doesn't exist for CallFire, but that is part of the motivation of this review. We are looking to take a snapshot of where the platform is at, in hopes of providing much needed input for the roadmap, as well as establish a version 1.0 evangelism strategy--we will revisit the evangelism portion of this review in a couple months.
In short, CallFire passes my review. There are several key areas, like mobile and roadmap communication, that are missing from the platform entirely, but in other areas CallFire nails almost every one of my review criteria. The API is robust, the documentation is complete, and they provide all the essential support building blocks.
One of the things that really stands out for me is the CallFire business model, something that I think really cuts through the BS of many APIs I look at. CallFire has a clear business model, and the tools to manage your API usage. There is no grey area with the business model for CallFire, which is something I just don't see a lot of.
I'd say my biggest concern with the platform is the lack of diverse code resources. I can't tell if they are just getting going, or maybe a lack of developer skills is slowing the diversity of available coding resources--I am just not sure. My guess is there is a lack of diverse developer skills on staff, which explains the lack of mobile SDKs, and the SOAP residue on the REST API. My advice is to invest in the developer resources necessary to load the platform up with a wide variety of coding resources that developers can put to work in their projects.
Beyond the code resources, it is really just a bunch of smaller items that would bring the platform into better form. CallFire definitely reflects everything I'm looking for in an API platform, and is something I've included in the top APIs I track on as part of my API Stack. Additionally, I've gathered a couple of other stories while doing this review, including the overall monetization strategy, the notification system under account settings, and their usage of Swagger—which is always another good sign of a healthy platform, and a positive review.
Lots going on with the CallFire platform, I recommend taking a look at what they are up to.
This was a paid review of the CallFire platform. If you'd like to schedule a review of your platform, please contact me, and we'll see if I can make time. A paid review does not equal a good review. It is my goal to give as critical, and constructive, feedback as I can to help API providers enhance their roadmap and better serve their consumers.
I am currently trying to move forward the 917 companies, from 223 business areas, with a total of 882 APIs catalogued, and 407 Swagger definitions created, while working on a distributed way to understand how far along the profiling for each company is, and how much of it is defined in a machine readable way. I had kicked off another APIs.json prototype a few months back I'm calling api-questions to handle just this, in a way that allows me to ask human readable questions about APIs, while also storing the answers in a distributed, machine readable format that can be indexed via each APIs.json file.
I have long tracked on what public APIs are doing in a database. I keep links to Github and Twitter profiles, blogs, pricing, and terms of service. I started publishing this information to APIs.json for the companies I track on a while back. I have the information, but as I work my way through thousands of APIs, trying to make sure there is a complete definition available for each, I needed an automated way to make sure I'm asking the right questions consistently of each API.
So, in addition to my list of Swagger oriented questions, I've compiled a list of the most common APIs.json defined, API operations related questions that I ask:
- Is there an API? (the most important question of them all!)
- Is there an APIs.json File?
- Is there a blog?
- Is there a blog RSS feed?
- Is there a portal?
- Is there a platform description?
- Is there getting started information?
- Is there documentation for the API?
- Is there interactive documentation for the API?
- Is there an authentication overview?
- Is there self-service registration?
- Is there request for access?
- Are there code samples?
- Are there code libraries?
- Are there SDKs?
- Is there email for support?
- Is there an FAQ section?
- Is there a knowledge base?
- Is there a forum?
- Is there a Twitter account?
- Is there a LinkedIn account?
- Is there a Github account?
- Is there a pricing page?
- Is there a rate limit page?
- Is there a road map?
- Is there a change log?
- Is there a status page?
- Is there a terms of service?
To help me along, I've created a simple API for managing my API related questions, then I created another API for asking these questions of specific APIs.json files. If you want to use this API, you need an APIs.json index for your APIs, which all of the APIs in my API Stack have--then it will spider the index, do its best to answer the questions above, and return specific answers.
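The mechanics of asking these questions of an APIs.json index can be sketched out in a few lines. This is a minimal sketch, not the actual api-questions implementation--the property types it checks for (Swagger, X-blog, X-pricing) are assumptions about how a given index labels its properties.

```python
# A sketch of answering APIs.json-driven questions against a parsed index.
# The property types below (Swagger, X-blog, X-pricing) are illustrative.

def has_property(index, property_type):
    """Check every API in the index for a property of the given type."""
    for api in index.get("apis", []):
        for prop in api.get("properties", []):
            if prop.get("type") == property_type:
                return True
    return False

QUESTIONS = {
    "Is there an API?": lambda index: len(index.get("apis", [])) > 0,
    "Is there a blog?": lambda index: has_property(index, "X-blog"),
    "Is there a pricing page?": lambda index: has_property(index, "X-pricing"),
    "Is there a Swagger definition?": lambda index: has_property(index, "Swagger"),
}

def ask_questions(index):
    """Answer each question against a parsed APIs.json index."""
    return {question: check(index) for question, check in QUESTIONS.items()}

# A hand-rolled index, standing in for a spidered apis.json file.
index = {
    "name": "Example API Stack",
    "apis": [{
        "name": "Links API",
        "properties": [
            {"type": "Swagger", "url": "http://example.com/swagger.json"},
            {"type": "X-blog", "url": "http://example.com/blog"},
        ],
    }],
}
print(ask_questions(index))
```

The answers come back as simple booleans per question, which is what makes it easy to score how complete each APIs.json file is.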
The results are far from perfect, but it is a start. I will be making the questions more precise, and adding new questions. My goal is to have a real-time way of telling how complete my APIs.json files are, and where the work is that needs to be done--as I am doing the work.
I'm sure the definition of exactly what a complete APIs.json file is will continue to evolve, always resulting in a human having the final vote, but for now I will just keep defining, until I find the right balance between the programmatic, and the human touch.
In 2010 I started API Evangelist, as part of my effort to better understand the world of APIs. I was looking at not just the technology of how it was done--there were plenty of people doing that. I wanted to better understand the business of APIs, and how popular APIs were finding success with their business models, developer outreach, and other aspects of API operations--going well beyond just the technology.
API Evangelist started simply as a blog. I was not a big fan of WordPress, as I knew it could be quite a security target, and as a platform, for a programmer like me, it can make even the simple things more difficult to get done. With this in mind I started the first API Evangelist API, by following the same advice that I would be giving to my potential audience.
Quickly I needed some additional APIs, to keep track of some of the players I saw emerging across the landscape. In 2010, the most important piece of the puzzle, when it came to the business of APIs (after the APIs themselves) was the growing number of API service providers, who were popping up to deliver much needed services to API providers. To support my work I added a couple more API endpoints.
Tracking on APIs wasn't that critical, as it was something ProgrammableWeb already did, but I still preferred to track on some additional details for my own needs. When it came to deploying my own APIs, I kept them as simple as possible, using the backend that I knew best--the LAMP stack. I was already running several websites on Amazon Web Services, so I chose to deploy my servers and database using Amazon, using a pretty proven formula for delivering a backend stack.
LAMP Stack - MySQL (RDS) + PHP / Linux (EC2)
PHP may not be the choice of API champions, but I was fluent in it, and I knew that when the time came, and I open sourced the back-end for API Evangelist, that if everything was straight up LAMP stack, I would reach the widest possible audience. Now, with my base API infrastructure in place, I began designing, and deploying other APIs I needed to track on the world of APIs. I needed to track on some of the open source tools I was finding, so I added a new endpoint.
After tooling, links were quickly becoming a big challenge for me. I needed a way to track on links to not just stories, but also events, white papers, presentations, and other resources that I was referencing in my research. I needed another API that I could use to store links from across the space.
Once my links API was in place, I also began using it for a number of other functions, showing me that I quickly needed more functionality beyond simply storing a title, description, and URL. As I was monitoring the space, I saw that I would need a way to curate links to important stories across the fast growing space each week--resulting in me adding a new layer to my links API.
Beyond tracking on important links, and curating the news each week, I started to see that many of the links I placed in my blog posts were disappearing. The API space moves really fast, and many of the API companies that were being acquired or shutting down were simply disappearing. I didn't like my readers stumbling across dead links, so I added another layer to my links API to help, which I called linkrot.
Part of the linkrot API operations was taking a screenshot of each website referenced, so when it disappeared, I could easily replace it with a popup screenshot of what used to be there. To support this I was using a number of 3rd party screenshot APIs, but after three separate ones each shut down, I eventually created my own screen capture API.
Beyond links, I was hitting on similar problems with my service provider API. Some of the companies I tracked on were not API providers or API service providers, and I needed a way to track on other types of companies. At the same moment I realized I also needed a way to track the individuals who were doing interesting things in the space, as well as the companies they worked at, resulting in the creation of two new APIs.
Similar to links, I created a new API that allowed me to centrally manage all the images I used. I store all my images on Amazon S3, but I needed a way to track not just the URL, but also a plain English title, description, tags, and other metadata about each image. I added another new API.
Then, similar to my link system, I began having more advanced needs for images, which required numerous endpoints to be added, allowing me to better manage the visual side of operations for API Evangelist. Images are critical to my storytelling, so I hand crafted exactly the APIs resources I needed to get the job done.
There are too many APIs to list as part of this story. Ultimately over the last five years, I've added an ever increasing number of utility APIs that help me manage data and content across the API Evangelist network. Most of these APIs were custom developed, but some simply provided a facade for other public APIs, or open source software I had installed on the server.
I was accumulating a pretty interesting stack of information, which I was using to power my own network, but I was the API Evangelist, without any public APIs. I wanted to open up my APIs to the public, and what better way to do it than to evaluate the API management providers I had been writing about, a process which resulted in me choosing 3Scale API Infrastructure. 3Scale had the features I needed like user management, analytics, and valuable service composition tooling, allowing me to craft different levels of access to my APIs for public usage, access by my partners, and of course for my own internal usage.
Using 3Scale I dropped an include file into my existing API stack, and with just a few lines of code I secured my APIs, and metered who had access, requiring registration for public or partner levels of access. To support this I launched a simple portal for the APIs I was making public. I didn't release all of my APIs, only the ones I felt the public would be interested in.
I kept on working on my infrastructure, adding an increasing number of endpoints, building on existing APIs, evolving my APIs to help me better monitor the API space, organize information, and craft the stories that I publish each week to the API evangelist network. Eventually I had hundreds of endpoints, some of them well planned, but many of them just thrown up to support some need I had at the moment.
The API Evangelist API stack was growing more unwieldy by the day, and even though I am the sole developer, sometimes I'd lose track of what I had. I also struggled with consistency--I am not always the most regular guy when it comes to naming conventions, use of query parameters, headers, and other common illnesses you see in API design across the space. Another thing that was getting out of control was my backend database.
It was my database. I could do whatever I wanted. I just added tables, columns, and new databases as I saw fit--again, all without much consistency. I was growing a pretty large legacy code base, which was API driven, but API does not always equal better. You can just as easily build a monster with APIs as you can with historical approaches to software design. Additionally, my architecture ran on a single Amazon RDS database instance, with a single Amazon EC2 Linux instance, serving up all the APIs.
If one API failed, all APIs failed. If I was running a large job against a single API, say creating screenshots, compressing images, or other CPU intensive processes, my other APIs suffered. Also, if I rebooted the server, everything went dark. Which, as an independent operator, is always very tempting.
I loved my evolving API stack, it did what I needed, but it was increasingly looking like many of the legacy systems I had managed in the past, only this time it was all accessible through a single API stack. As long as my APIs did what I needed, this wasn't a problem, but in the back of my head I knew eventually it would catch up with me. For now I ignored it, and moved forward comfortably numb (Pink Floyd).
Along the way I also discovered new tools for helping me manage my APIs, specifically my API documentation, by using Swagger. If I crafted a machine readable JSON definition for my APIs using Swagger, I could automatically generate interactive documentation for my APIs, that was always up to date.
Quickly I found Swagger to be much more than just a machine readable API definition format for generating interactive documentation, or client side code libraries. While it still isn't fully realized, I saw that if I took an API design first approach using Swagger, it would slowly become a central truth throughout my API life cycle. This was just the beginning of a new world of API design, deployment, and management for me.
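To illustrate what one of these machine readable definitions looks like, here is a minimal, hypothetical Swagger 2.0 definition for a single endpoint, expressed as a Python dictionary--the title, path, and parameters are made up for the example, not taken from the actual API Evangelist stack.

```python
import json

# A minimal, hypothetical Swagger 2.0 definition for one endpoint of a links API.
swagger = {
    "swagger": "2.0",
    "info": {"title": "Links API", "version": "1.0"},
    "basePath": "/links",
    "paths": {
        "/": {
            "get": {
                "summary": "List curated links",
                "parameters": [
                    {"name": "tag", "in": "query", "type": "string", "required": False}
                ],
                "responses": {"200": {"description": "An array of links"}},
            }
        }
    },
}

# Tools like Swagger UI read this JSON to render always up to date, interactive docs.
print(json.dumps(swagger, indent=2))
```

Because the definition enumerates every path, parameter, and response, anything generated from it stays in sync with the API itself.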
Along the way in 2014, Steve Willmott, the CEO of 3Scale, and I developed a machine readable API discovery format, which we called APIs.json. This new JSON format was intended to provide a machine readable index of all APIs that exist within a single website domain.
Using APIs.json I could provide essential meta data about the domain an API operates in, as well as for each API endpoint available, like name, description, tags, and critical links to aspects of operations like the portal landing page, API documentation, code libraries, pricing, or terms of service. For each API I hand crafted an APIs.json, allowing me to publish a machine readable index of APIs that exist within my domain.
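As a rough illustration of the shape of the format, here is a minimal, hand-rolled APIs.json index--the API entry, URLs, and property types here are hypothetical, standing in for the real indexes described above.

```python
import json

# A minimal, hand-rolled APIs.json index; entries and URLs are made up.
apis_json = {
    "name": "Kin Lane",
    "description": "A machine readable index of the APIs within this domain",
    "url": "http://kinlane.com/apis.json",
    "specificationVersion": "0.14",
    "apis": [{
        "name": "Links API",
        "description": "Stores and curates links from across the API space",
        "baseURL": "http://api.example.com/links",
        "properties": [
            {"type": "Swagger", "url": "http://api.example.com/links/swagger.json"},
            {"type": "X-documentation", "url": "http://example.com/links/docs"},
        ],
    }],
    # Other APIs.json indexes can be linked in via the include collection.
    "include": [
        {"name": "Public APIs", "url": "http://apievangelist.com/apis.json"}
    ],
}
print(json.dumps(apis_json, indent=2))
```

Each entry in the apis collection hangs its Swagger definition, documentation, and other operational links off the properties array, while the include collection is what lets one master index link out to many smaller ones.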
APIs.json immediately provided me with a very valuable index of my own APIs. Then I got to thinking: what if I also indexed the public APIs that I depended on as well? I put around 50 public APIs to use, yet nowhere did I have a comprehensive list of these APIs, especially not all the meta data for API operations including documentation, code libraries, pricing, and terms of service, let alone a machine readable definition of each surface area using Swagger.
I now had three separate APIs.json files, one for my own internal APIs, one for my own APIs I offer up publicly, and a third for the distributed public APIs that I depend on for my operations. I didn't just have a single index of my APIs, I had essentially mapped out all of my stack, providing me with a single location I could go to find all my APIs.
This isn't just about providing the public with a discovery solution for my APIs, it is also about me understanding my entire surface area. Honestly, this showed me what a mess much of my infrastructure was, but regardless, at least now it was mapped out and known, for good or bad. For the first time, I felt like there was potential for getting my house in order--then Docker happened.
The new containerization solution provided me a new way to look at my architecture. Rather than one big MySQL instance, and a supporting Linux / PHP / Apache server instance, I could deploy a single Docker instance, with many little LAMP nodes, each with the basic configuration I needed to run my APIs. Rather than having all my APIs served up via one server, I was able to essentially break up each API, and put it into its own, independent container.
Sure, my APIs still ran on a single AWS EC2 instance, but now they each ran in individual Docker containers, which could be easily fired up as separate instances, running on any infrastructure--something I would test out later. For now, I was happy knowing that I could slowly separate out each of my APIs, allowing them to act independently of each other, with hopes that a single API could fail without bringing the entire stack to its knees. Conversely, each API could be independently scaled to meet the specific needs of just that API, without being forced to scale all my APIs.
At the same moment, another thought process was evolving, something that really wasn't much different than what I could already do, but provided me a strong incentive for rethinking how I approached my architecture, something that was not just complemented by, but also facilitated by Docker containerization--Microservices.
Microservices are more philosophy than a concrete technology like Docker, but they provide a healthy basis for thinking about how you design, deploy, and manage your APIs. With this in mind I set forth crafting my own definition of just exactly what micro meant to me, something that is proving to be a very personal thing, evolving from individual to individual, and company to company.
- minimal on disk
- minimal time to rebuild
- minimal time to throw away
A microservice way of thinking, coupled with Docker enabled containerization, has allowed me to rethink how I design, and deploy my APIs. While I'm not fully subscribing to popular opinions around what is a microservice, it is allowing me to rethink many of my own architectural patterns. However this new way of thinking, came with some shortcomings, around how I uniquely identify API resources, as well as discover resources, but luckily I already had been working on solutions in these areas.
All of my APIs were already defined using Swagger, which essentially provided me with a fingerprint of each API that I can use to uniquely identify it, as well as quantify the entire surface area, such as how many endpoints and parameters there are, and details about the underlying data model and message formats. I now had my solution for API identification, but what about API discovery?
As I contemplated untangling the legacy API mess I had created for myself, I was concerned that if I reduced the size of each API, spawning new APIs, I would eventually have over 500 individual APIs, potentially creating even more of a nightmare for myself. Luckily I had already started hanging these APIs onto an APIs.json file, something I can continue to replicate into smaller, more isolated groups, while also linking up using the APIs.json include collection. I would no longer have just three APIs.json files, I would have many nodes of APIs, indexed using APIs.json, with a master APIs.json as the doorway to my increasingly decoupled world.
As I got to work redefining my API stack, I couldn't disrupt the current set of APIs I have deployed to API Evangelist, so I decided to publish this core set of tools under the KinLane.com domain. It actually makes sense, because many of the APIs are not directly related to API Evangelist, spanning the larger work that I do. The Kin Lane brand is bigger than just API Evangelist, which is just one node underneath it.
After completing the first wave of converting my legacy API stack, using my new containerized, micro design approach, each API complete with its own Swagger and APIs.json, I ended up with 25 separate APIs, with over 250 endpoints. This provided me with a next generation blueprint of my API stack, one I could easily follow, add to, and evolve over time.
As I carved off each API, defining the next generation, I worked to keep each as small and self-contained as possible. My links API became separate link, curation, and linkrot APIs, with additional supporting services for screen capture, pulling content, and other utilities as their own, individual endpoints.
My images API is getting similar treatment, carving off many of the support utilities as standalone features so that I can use them separately. While I may use resize, compress, and other utilities in conjunction with my core images API, many times this won't be the case, especially if I open up to the outside world.
This process of reducing the scope of my APIs isn't just about size, it is also about isolation of services, keeping my APIs doing one thing, and doing it well. This allows me to deploy, scale, and migrate my services exactly as I see fit. While my definition of a microservice may not be everyone else's, it helps provide a guide for me as I'm evolving legacy APIs, as well as defining and evolving new ones. With my new stack, I can now begin to think about how I deploy my cloud infrastructure a little differently.
- AWS = Containers
- Google = Containers
- Microsoft = Containers
I can easily fire up a Docker stack in both AWS and Google, both using the exact same LAMP stack configuration, and pull in each of my API definitions, allowing me to easily deploy, and migrate between, cloud providers. It isn't as if I can drag one container from AWS to Google--to accomplish this, I rely on Git, or more specifically Github.
Each API lives as its own Github repository, with every aspect of its existence present either within the private master branch of the repository, or in the public gh-pages branch. Using Github I store the server side PHP code I will use when deploying each Docker container, but I also store the data model and data backups in the private, secure master branch of the repository.
At the center of each Github repository is the APIs.json for the API, providing an index of not just meta data about the API, but its Swagger fingerprint, server code, client code, and other essential elements of operations. When I fire up a new Docker container, I reference where to find its server side API code, which also simultaneously provides it with the Github organization and repo where it can find its APIs.json index.
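The bootstrap step described above might be sketched out like this--a container resolves its Swagger definition and server side code locations from a Github-hosted APIs.json. The X-server-code property type and the helper names here are assumptions for the sake of the example, not the actual implementation.

```python
import json
from urllib.request import urlopen

def resolve_api(index, api_name):
    """Find one API in a parsed APIs.json index and pull out the property
    URLs a freshly started container needs to bootstrap itself."""
    for api in index.get("apis", []):
        if api.get("name") == api_name:
            props = {p["type"]: p["url"] for p in api.get("properties", [])}
            return {
                "swagger": props.get("Swagger"),
                "server_code": props.get("X-server-code"),  # hypothetical type
            }
    return None

def bootstrap(apis_json_url, api_name):
    """Fetch a Github-hosted APIs.json and resolve a single API from it."""
    with urlopen(apis_json_url) as response:
        return resolve_api(json.load(response), api_name)
```

Keeping the resolver separate from the fetch means the same lookup works whether the index comes from a gh-pages branch, a local clone, or any other mirror of the repository.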
This approach allows me to easily fire up containers in any cloud provider that supports Docker, which is pretty much all of them in 2015. The best part of this, is I can also deploy locally in my own home or office, using a local server, or even via Raspberry Pi (work in progress).
Another important aspect of this evolution of my API stack is that as I've decoupled my APIs, allowing them to be reborn as a set of independent, portable APIs, forever changing how I design and deploy them, my API management is also undergoing the same treatment. You see, if my APIs can migrate and move, so must the API management layer that I use to orchestrate them.
To begin with, I need the basic ability to manage my API consumers. I need the basic controls for managing access to all the APIs contained within a stack, alongside all my APIs, as an equal resource.
With the ability to add, read, update, and delete users on my platform, I also need the ability to manage each of the accounts. This way I can directly access each account within my API stack, but more importantly my API consumers can also manage their own accounts using the same APIs.
- Account Set Credit Card
- Account Delete Credit Card
- Invoice By Account
- Invoice Line Item List
- Invoice Payment Transaction List
A key part of my API management infrastructure is the concept of service composition. Using 3Scale I have created multiple tiers of access to APIs, allowing for a public free, entry level layer, as well as higher levels of paid, partner, and internal access. I need the details of these service plans I've created, including their features to move with my API infrastructure.
- Service Plan List
- Service Plan Feature List
- Service Plan Set To Default
As each of my API consumers register, and manage their API access, the next level of API access is handled through the application level management layer. 3Scale gives me the tools to do this, which I've extended as individual API endpoints.
- Application Plan List (per isolated service)
- Application Change Plan
- Application Key List
- Application Key Create
- Application Key Delete
- Application Usage by Metric
There are plenty of other services that 3Scale API infrastructure gives me, but this represents the v1 API stack I need to help orchestrate this new, containerized, potentially distributed API stack I've setup for myself.
Eventually I'd like to see the entire 3Scale infrastructure decoupled, similar to my stack, giving me, and my consumers, access to all API management features right alongside the other API resources I am providing--giving me a pretty complete stack. Now I'm ready for the higher level evolution of my APIs, between my two organizations.
With my new approach I can easily establish a new Github organization for API Evangelist, pick exactly the APIs I want, fork them to my new organization, deploy a new container server on AWS, Google, or wherever I need it, and have a brand new stack. I will use the same 3Scale API infrastructure account to manage both my Kin Lane and API Evangelist API stack, so other than deploying the individual APIs, there is no configuration needed, my API management follows my APIs.
All along the way, I've been on-boarding new API consumers, as well as migrating the users who have historically used my API Evangelist stack of APIs. Some of these users have shown interest in being able to scale their usage of some of my APIs. These tend to not be the core blog or company APIs I've developed as part of my monitoring of the API space--they are the more utility APIs.
While many of my users are perfectly happy using these utility APIs via my API stack, some of them have expressed interest in being able to scale the APIs to meet the demands of their operations. Just as I will scale up Docker containers for my own needs, my customers are hoping to do the same, or possibly even deploy my API designs in their own infrastructure--opening up a whole new level of API monetization for me.
I can even take this to the next level, and deploy entire API stacks for customers, in their infrastructure or mine. Providing all APIs via an a la carte menu gives an entirely new dimension to my API deployment strategy. I can deploy API stacks exactly as I need them, for my own needs, or for my customers--using the same API infrastructure. The best part is that it isn't just about deploying API infrastructure for my needs--my customers can get in on the API provider game.
My customers can start by consuming APIs, then evolve to deploying them within their own infrastructure, and when ready, they can open up these APIs to their own customers, and the way my 3Scale infrastructure works, I can switch them over to their own API management account at any time, with no changes needed. I just create a new top level account, change the master API management key from mine to theirs, replicate service levels, and their customers can start signing up, and consuming their APIs.
This approach to API deployment and management opens up not just a new way of monetizing APIs at a wholesale level, it opens up the power of being an API provider to my API community. Another interesting aspect of this approach is because each API comes complete with a machine readable Swagger API definition, and APIs.json file index, each API, and each API stack is easily discoverable, and open to the delivery of API focused services, from 3rd party providers.
In 2015, more API service providers are supporting Swagger, and other common API definition formats like API Blueprint, as a way to on board, configure, automate, and access valuable API services in the cloud, or even on-premise. While to many Swagger is a way to generate interactive documentation or client code libraries, it is being used by a growing number of companies for much, much more.
Using API Science I can import the Swagger files for my APIs, and generate the monitors I need to keep an eye on API operations. Once API monitors are setup, I receive regular emails per my account, and can embed dashboard tools for helping me visualize my platform availability.
Similar to API Science, using Runscope, I can import Swagger definitions into my account, and automate the setup of tests, which I can run as part of my regular platform monitoring and testing--saving serious time in how I monitor my APIs.
Postman allows me to import my Swagger definitions, and create ready to go API calls, where I can easily understand the requests and responses of all my API calls, and collaborate with other API consumers on my team. Postman allows me to easily work with my APIs without a UI, and with some of my APIs, it is how I engage--never quite needing a full UI.
SmartBear also allows you to import Swagger, and generate mock and virtualized APIs, and run testing, monitoring, performance, and security tests against your APIs. Using Swagger I can quickly configure a number of vital services I need to operate a healthy platform.
This is something that isn't just machine readable--it can also be translated into UI, browser, IDE, and other more human aspects of API integration and engagement.
While none of the service providers listed currently support the importing of APIs.json, only Swagger, eventually they will. APIs.json will provide a single entry point from which these API service providers can import not just one, but many APIs, and configure valuable services throughout an API's life cycle. An example of this in action can be found using the APIs.io API search engine.
APIs.io provides an APIs.json import, allowing you to index all of your APIs, and the supporting elements of API operations. Once indexed, APIs will be available to the public, via the open source API search engine. This is just the beginning of APIs.json enabled search--other providers are getting in the game as well.
With the next release of the WSO2 API management platform, API providers will be able to organize APIs, and export them as APIs.json collections, opening up not just listing in the public APIs.io search engine, but also the deployment of private, internal API search engines, using either APIs.io, which is open source, or a custom solution. I have also begun delivering tooling that employs APIs.json, for delivering vital services along the API life cycle.
Using the Swagger Codegen project, which is an open source solution for generating client side libraries, I deployed an API that accepts any valid APIs.json file, and returns seven separate client side libraries, generated from the Swagger definition. While these are by no means a perfect client solution, they provide a nice start for developers to get going, eliminating some of the more redundant aspects of API integration.
An advantage of using APIs.json to index API operations, and on board API service providers, is that it can provide access to one, or many API definitions, as well as other aspects of API operations, like documentation, code, and pricing. While many of these aspects are not machine readable, once an API has been imported into a service provider's platform, these other elements can provide important references that service providers can use to determine next steps. I'd call this level of API service delivery a more inbound approach, but APIs.json also brings outbound effects to the table.
Once a service has been rendered, service providers can also provide other elements that can be hung on an APIs.json, just as with Swagger, and other elements of operations. I am already including references to Postman collections in my APIs.json files, and have begun adding API Science statistics and visualizations as part of regular API indexing details.
This provides both an inbound opportunity for the delivery of new services, but also the publishing of essential outcomes from those services being delivered, that can benefit API service providers. These details can also provide important elements for other API service providers to use. Imagine if API providers like API Change Log could pull in API Science and Runscope details to enhance and augment their own services, providing more details about the availability of services, and the changes that have occurred--then API Change Log can publish their own results, further enriching the index for each API.
This opens up a community effect, when it comes to delivering vital services throughout an API life cycle. As an API provider I do not have to do everything myself. I can design, deploy, and manage my APIs, then allow service providers to index my APIs.json, and demonstrate the value their services deliver, and then choose which services I need--making the API life cycle a more community driven affair, opening the door to a wealth of potential APIs.json driven services.
As an API provider I do not have time to do everything, and many of the API service providers out there need valuable meta data about API operations to offer, enhance, and evolve their service offerings. I can take these services and improve my API operations, no matter where these APIs exist, publicly or privately--all I need is an APIs.json file as the seed. All of this moves APIs.json well beyond just discovery, and like Swagger provides a central truth for not just defining the API, but defining entire API operations.
We still have a long way to go before all aspects of API operations are machine readable, but APIs.json can be used today for indexing the technical side of operations, and with the help of API definition formats like Swagger and API Blueprint, these elements of operations can be read, imported, and acted upon programmatically by other systems. Since all of my APIs now live in their own Github repositories, with APIs.json as a central index, pull requests can be made adding elements to the index, potentially by third parties--further making the API life cycle a community effort.
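To make this concrete, here is a minimal sketch of what one of these APIs.json indexes might look like, expressed as a Python dictionary. The shape follows the APIs.json format as described above, but the specific API name, URLs, and the `X-API-Science` property type are hypothetical stand-ins, not an official vocabulary.

```python
import json

# A hedged sketch of an APIs.json index: one API entry, with machine
# readable properties hanging off of it. Everything here is example
# data; consult the apisjson.org specification for exact field names.
apis_json = {
    "name": "Example API Stack",
    "description": "Index of the APIs available on this domain",
    "url": "http://example.com/apis.json",
    "apis": [
        {
            "name": "Example Screen Capture API",
            "baseURL": "http://api.example.com/capture",
            "properties": [
                # Technical definitions that other systems can import
                {"type": "Swagger", "url": "http://example.com/swagger.json"},
                {"type": "Postman", "url": "http://example.com/postman.json"},
                # Hypothetical service-provider exhaust hung on the index
                {"type": "X-API-Science", "url": "http://example.com/monitoring.json"},
            ],
        }
    ],
}

print(json.dumps(apis_json, indent=2))
```

A service provider crawling this file could follow the `Swagger` property to import the API, and publish its own results back as another property, which is the community effect described above.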
Swagger and API Blueprint have solidly moved beyond being an interesting new advancement in the API space--they are being used by startups, small businesses, enterprise groups, and even government agencies to define the technical side of API operations. When bundled with APIs.json, we can build a machine readable bridge to the other aspects of API operations, like the business side.
Swagger and API Blueprint both started by providing interactive API documentation, an essential building block for on-boarding API consumers, which is more about API business than the technical details. These API definitions are now moving into other business areas, like delivering how-to materials, potentially driving pricing, dashboard elements, or other embeddable elements that can be used across API operations.
With the introduction of machine readable API licensing formats like API Commons, the legal, or as I call it, the political aspects of API operations come into play as well. I am using APIs.json to connect the service composition, tiers, and rate limits to the technical and business details that are indexed in an APIs.json file. This is just the beginning--soon other aspects like terms of service, patents, and deprecation policies can also be included, further expanding the APIs.json index of each API, and of collections of APIs.
This is a story of my own infrastructure, but it is derived from the research I do across the API space. I work hard not to be just an API pundit, and to actually practice what I preach. I'm slowly moving from the academic version of this to a fully functional version--the architecture that I use to run the API Evangelist network.
I gave a version of this talk last night at the 3Scale offices, and will be giving various versions of it at APIDays Mediterranea this coming week in Barcelona, and again at Gluecon in Colorado, on May 20th. Over the summer I will continue to evolve my architecture for both Kin Lane and API Evangelist, and evolve a new set of stories and talks that I can give, helping me share my approach.
I will publish the slide deck of each talk that I give at APIDays and Gluecon on my Github repo when ready, for everyone to reference. Thanks for paying attention.
Disclosure: 3Scale and WSO2 are both API Evangelist Partners
This is something I talk about often, but it has been a while since I’ve done a story dedicated to it, so I wanted to make sure and take a fresh look at what I’d consider to be a minimum viable footprint for any API—I don’t care if it is public or not. This definition has grown out of five years of monitoring the approach taken by leading API providers, and is also baked into what I’d consider to be a minimum viable APIs.json definition—which provides an important index for API operations.
What do I want to see when I visit a developer area? More importantly, what does your average developer, or API consumer need when they land on your API portal? Let’s look at the basics:
- Portal - A clean, easily accessible, prominently placed portal landing page. This shouldn’t be buried within your help section; it should be located at developer.[yourdomain].com.
- Description - As soon as developers land, they need a simple, concise explanation of what an API does. Actually describe what the API does, not just that it provides programmatic access to your products and services.
- Getting Started - Give everyone, even non-developers, a place to start, helping us understand what is needed to get going with API integration, from signing up for an account to finding support.
- Documentation - Deliver simple, clean, and up to date documentation, preferably of the interactive kind, with a Swagger or API Blueprint definition behind it.
- Authentication - Help developers understand authentication. There are a handful of common approaches, from BasicAuth and API keys to OAuth--provide a simple overview of how authentication is handled.
- Self-Service Registration - How do I sign up? Give me a link to a self-service account signup, and make it as easy for me to create my new account, and go from idea to live integration as fast as possible—don’t make me wait for initial approval, that can come later.
- Code - Provide consumers with code, whether samples, libraries, or full blown Software Development Kits (SDKs) and Platform Development Kits (PDKs). Make sure as many languages as possible are provided, not just the languages you use.
- Direct Support - Give API consumers a way to reach you via email, a ticketing system, chat, or a good ol' fashioned phone call.
- Self-Service Support - Provide self-service support options via FAQs, knowledge bases, forums, and other proven ways developers can find the answers they need, when they need them.
- Communication - Set up the proper communication channels, like a blog and PR section, as well as a healthy social presence on Twitter, LinkedIn, Facebook, or other places your audience already exists.
- Pricing - Even if an API is free, provide an overview of how the platform makes its money and generates value--enough to keep it up and running--so that, as an API consumer, I know I can depend on it. Let me know all pricing levels, and provide insight into other partner opportunities.
- Rate Limits - Provide a clear overview of how the platform is rate limited. Even if the limits exist only to protect service availability, let consumers know what to expect.
- Roadmap - Give consumers a look at what is coming in the future, keeping it a simple forecast of the short and long term future of an API.
- Change Log - Provide us consumers with a list of all changes that have been made to platform operations. Don’t limit it to just API changes, and include significant roadmap milestones that have been reached.
- Status - Offer a real-time status dashboard for the platform, with a historical view of platform status, that consumers can use as a decision making tool, as well as to check current platform status.
- Github - Use Github for all aspects of your API platform operations from hosting code, to providing support via issues, to actually hosting your entire developer portal on Github Pages.
- Terms of Service - Provide easy to find, easy to understand terms of service for platform operations, helping API consumers understand what they are in for.
- APIs.json - A machine readable index of any API driven platform, providing a single place to find not just the API endpoints, but also all of the essential building blocks of API operations listed above.
This is my shortlist of common building blocks that every API platform should have. Part of the reason I’m publishing this is to provide a fresh look at what I’d consider to be the minimum viable footprint, but I’m also working to get my own API portal for my new master API stack up to snuff, meeting my own criteria. Without a checklist, I forget what some of the essential building blocks are--you know, the cobbler's kids have the worst shoes and all.
After I’m done making sure my own API portal meets these criteria--something I can programmatically measure via the APIs.json file--I will provide a self-service evaluation tool that anyone can use to measure whether or not their own portal meets my minimum viable API footprint definition.
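As a sketch of what that programmatic measurement could look like, here is a small Python function that loads an APIs.json index and reports which building blocks are missing. The `X-` prefixed property type names are my own assumptions for illustration, not an official APIs.json vocabulary.

```python
import json

# Hypothetical property types standing in for the building blocks
# in the checklist above -- these names are assumptions, not a spec.
REQUIRED = [
    "Swagger",             # machine readable API definition
    "X-Documentation",     # human readable docs
    "X-Getting-Started",
    "X-Pricing",
    "X-Rate-Limits",
    "X-Terms-of-Service",
    "X-Status",
]

def missing_building_blocks(apis_json):
    """Return the required property types absent from every API entry."""
    found = {
        prop.get("type")
        for api in apis_json.get("apis", [])
        for prop in api.get("properties", [])
    }
    return [block for block in REQUIRED if block not in found]

# A toy index with only two of the required properties present.
index = json.loads("""{
  "name": "Example API",
  "apis": [{"name": "Example", "properties": [
    {"type": "Swagger", "url": "http://example.com/swagger.json"},
    {"type": "X-Pricing", "url": "http://example.com/pricing"}
  ]}]
}""")

print(missing_building_blocks(index))
```

Run against the toy index above, the function would flag documentation, getting started, rate limits, terms of service, and status as missing--exactly the kind of gap report a self-service evaluation tool could produce.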
In the coming months I’m doing some deep profiling of the API space, so you are going to see me reviewing the approach of more API providers in the space. My goal with API reviews is not just to showcase the company or service involved, but to review the overall approach of the provider. You can read more about my review process on API Evangelist, to better understand my objectives.
The review in the queue today is from Respoke, a web communications platform. When you land on the Respoke website, you see all the signs of a modern platform, starting with the simple single page app style website, but more importantly, they immediately tell me what they do, in a simple, easy to understand way:
Add live voice, video, text, and data features to your website or app
You wouldn't believe how hard this is for some of the APIs I review. If I have to spend more than 5 seconds trying to understand what you do, you've failed, and Respoke nails it, by both providing simple text, as well as visuals that help me understand that they are a web communications platform.
To continue understanding what Respoke does, let’s take a stroll through all of the areas I focus on during any API review.
Actual API Endpoints
I always start with the actual API endpoints when reviewing a platform. Respoke provides a basic set of endpoints, mostly for authentication of communications via the platform, but also for managing roles, groups, etc. You can tell the API is new, and it doesn't have the telltale signs of a fully mature API that has been used for a while, but I know with Respoke, they are just getting going. I’m not a fan of relying on POST, with request data passed through the body as JSON. I like simple parameter based design, with sensible usage of your verbs. I feel this approach makes an API more hackable by users, even non-developers who only know enough to be dangerous. It is not a show stopper, just a personal opinion of mine.
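To illustrate the design difference being described, here are the two styles side by side--a hypothetical "create a token" call, not Respoke's actual endpoints. The URLs and parameter names are made up for the sake of the comparison.

```python
import json
from urllib.parse import urlencode

# Style 1: everything tucked into a JSON body (the approach the
# review pushes back on) -- harder to poke at from a browser.
body_style = {
    "method": "POST",
    "url": "https://api.example.com/tokens",
    "body": json.dumps(
        {"appId": "abc123", "endpointId": "user@example.com", "ttl": 86400}
    ),
}

# Style 2: parameter based, with values visible in the query string,
# making the call more hackable by non-developers.
param_style = {
    "method": "POST",
    "url": "https://api.example.com/tokens?"
    + urlencode({"appId": "abc123", "endpointId": "user@example.com", "ttl": 86400}),
}

print(body_style["body"])
print(param_style["url"])
```

Neither style is wrong, and both can be applied consistently--the argument above is simply that visible parameters lower the barrier for casual experimentation.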
Respoke provides a simple, frictionless on-boarding experience, easing you in with a getting started guide that walks you through the platform, and the self-service registration flow you need to get started with the service. At first glance I’d say they could do a little better job of explaining the authentication. I haven't actually hacked on it yet, but at first glance it seems overly complex, or maybe it just could be explained better. Another thing I'd like to see is a simple FAQ that I can scan to see the common questions that get asked, and educate myself about Respoke in one page.
Like the API endpoints themselves, the documentation is simple. I’d like to see a Swagger or API Blueprint spec for the API, and some interactive documentation to go with it--in my opinion this is the default for all platform documentation in 2014. I suspect as the API matures, so will the platform documentation.
Authentication & Security
As I mentioned above, I think the authentication strategy is a little confusing when you are on-boarding. I’m guessing I don’t fully understand everything about the security and authentication of web communications via Respoke, but I also suspect they could put a little more work into helping new users understand how it works, get started with basic authentication, and then, when ready, learn more about the advanced security features around platform communications.
Code For Integration
Direct and Indirect Support
When it comes to support resources, Respoke has the minimum viable building blocks needed, providing indirect support via a forum, and direct support via email. This is definitely the baseline for any provider, but I would encourage the addition of a support ticketing system as they pick up momentum, and definitely keeping an eye on Stack Overflow, engaging developers on their own turf.
Communication With Community
The Twitter, Facebook, and Google+ social accounts are front and center for Respoke, and they appear to be active—nice. However, the lack of a blog always makes me sad. I know that many providers don’t feel they have the resources to publish a blog and keep it active, but for me this is one of the most important signals an API platform can provide. When it comes to communicating with your API community, I’d say the blog is number one, and after that Twitter and Github are numbers two and three. Respoke has 2 out of 3 of the essential communication building blocks.
As I made my way around the Respoke platform, I was happy to see a change log keeping developers up to date with what has happened on the platform. I would also consider providing some sort of roadmap, the other side of the platform update coin. Since Respoke uses WebRTC as a core technology, it seems there should be some element of updates regarding browser support, and updates to the WebRTC standard. I would also consider adding a status dashboard to keep Respoke users informed about the stability of the overall platform.
One of the things I judge APIs on is their pricing, and whether or not the pricing page is front and center--and Respoke's is. They provide you with a free tier for playing around, and sensible pricing tiers for you to grow into, with unit based pricing per minute and connection. How a platform wears its business model tells a lot about the underlying services, and Respoke is straight up about all of it. Kudos!
When it comes to supporting resources for the Respoke platform, there really are no case studies, slides from events, how-to guides, or other vital resources to help developers through all stages of development. This is one area I’m pretty lenient on early on, because I understand that many APIs are just getting going, and it will take time for supporting resources to be developed properly. My goal is just to make sure the platform is aware of the lack of general resources, and make sure it gets into the roadmap ASAP.
Research & Development
I always like to see some sort of forward leaning research & development area for an API, such as an idea showcase or labs environment. I think Respoke touches on this with their starter projects, which don’t just provide valuable code, but also ideas for how the service can be integrated with--but I think they could take this further. Respoke needs to bring their experience with web communications and WebRTC to the table, providing leading examples of what is next for web communications. There are many proven ways platforms demonstrate where their platform, and supporting technologies, are headed, and I’d like to see more of this within the Respoke community. I know the knowledge and talent exists; they just need to showcase it on site.
After I signed up for my Respoke account, logged in, and set up my first application, I noticed that Respoke has a dev mode for developers to take advantage of when integrating with their applications. I’m a big fan of sandbox or development environments, allowing developers to build in a safer, more comfortable environment, and then being able to easily flip the production switch in a self-service way when they are ready. I’d like to see more integration resources available to developers that help with development, QA, and production issues, and even provide monitoring, testing, and other common integration building blocks we are seeing from providers like APITools and Runscope.
A simple, robust developer portal is a must these days for any provider, and Respoke provides a pretty standard account management solution, giving you control over settings, the ability to manage individual applications, as well as your pricing and payment history. I’d like to see more analytics, and the other integration resources I mentioned above, here in my Respoke developer area, but overall it provides the minimum viable portal developers need to be successful when integrating with an API--and I anticipate it will only get better.
Balance and Consistency
Across the technical, business, and political building blocks of an API, I’m always looking for balance and consistency. This might be in how they craft their actual API endpoints, but could also be in API documentation, code samples, or even storytelling on the blog. I’d say they are consistent in their API design, but as I mentioned above, I’m not a fan of making requests a POST, using the body to pass values. While they are consistent in this use, I don’t think it's consistent with the modern web APIs developers are used to seeing. I think Respoke will achieve more balance and consistency in their API design, as well as in the content and other resources available on the platform, over time—they are just getting going, and it always takes a while to get firm footing in this area.
Is It An Open API?
I’m always hesitant to truly call an API “open”, until I peel back the layers of the onion, and make an assessment myself—just because an API is publicly available, doesn’t mean it is open. After doing this, I can confidently say that Respoke is an open API. It is publicly available, self-service, with a straightforward business model, including sensible TOS. Only time will tell if Respoke continues to deliver their services in an open way, but the way they've constructed their platform so far, tells me they will.
Like the resources area, there is pretty much no sign of life to report when it comes to evangelism of the Respoke platform. However, I do know they have hired an evangelist, I have a call scheduled with him, and I’m confident this is another area where we will start seeing activity. You can always tell when a platform has an active evangelist, because the social networks are active, Stack Overflow and Quora are engaged, there is a steady stream of commits on Github, a regular flow of stories on the blog, and lots of exhaust from the conferences and hackathons they attend. I'm guessing the next time I take a look at the Respoke platform, this will be different.
That concludes my review of the Respoke API, across these 18 areas. I’d give them a solid B for their efforts. The platform has some maturing to do, in the overall design of the API and the supporting building blocks, but this is standard operating procedure. Respoke is just getting going, and is kicking butt in most of the essential areas. My only red flag is the lack of a blog, but this is just one of my pet peeves. I need the storytelling heartbeat of a blog to help me get to know providers in real time. I do not have time to speak with all providers on a regular basis, and an active blog provides an asynchronous way to track thousands of companies, and their API resources.
Hopefully my review provides value for Respoke, but also the wider API space, and helps API providers understand how they can better craft their own API strategies. Respoke is something I’d share as a blueprint for how you can deliver a communication platform using APIs, and I will be keeping an eye on them, updating my definition of what they do as they evolve. Look for more updates as I continue to weave Respoke into my API Stack, as well as any of my other API research.
In my work every day as the API Evangelist, I get to have some very interesting conversations, with a wide variety of folks, about how they are using APIs, as well as brainstorming other ways they can approach their API strategy, allowing them to be more effective. One of the things that keeps me going in this space is this diversity. One day I’m looking at Developer.Trade.Gov for the Department of Commerce, the next talking to WordPress about APIs for 60 million websites, and then I’m talking with The Church of Jesus Christ of Latter-day Saints about the Family Search API, which is actively gathering, preserving, and sharing genealogical records from around the world.
I’m so lucky I get to speak with all of these folks about the benefits, and perils of APIs, helping them think through their approach to opening up their valuable resources using APIs. The process is nourishing for me because I get to speak to such a diverse number of implementations, push my understanding of what is possible with APIs, while also sharpening my critical eye, understanding of where APIs can help, or where they can possibly go wrong. Personally, I find a couple of things very intriguing about the Family Search API story:
- Mapping the worlds genealogical history using a publicly available API — Going Big!!
- Potential from participation by not just big partners, but the long tail of genealogical geeks
- Transparency, openness, and collaboration shining through as the solution beyond just the technology
- The mission driven focus of the API overlapping with my obsession for API evangelism intrigues and scares me
- They have an existing developer area, APIs, and seemingly all the necessary building blocks, but have failed to achieve platform level
I’m open to talking with anyone about their startup, SMB, enterprise, organizational, institutional, or government API, always leaving open a 15 minute slot to hear a good story, which turned into more than an hour of discussion with the Family Search team. See, Family Search already has an API, they have the technology in order, and they even have many of the essential business building blocks as well, but where they are falling short is when it comes to dialing in both the business and politics of their developer ecosystem to discover the right balance that will help them truly become a platform—which is my specialty. ;-)
This brings us to the million dollar question: How does one become a platform?
All of this makes Family Search an interesting API story. Given the scope of the API, to take something this big to the next level, Family Search has to become a platform--not a superficial “platform” where they are just catering to three partners, but one nourishing a vibrant long tail ecosystem of website, web application, single page application, mobile application, and widget developers. Family Search is at an important inflection point: they have all the parts and pieces of a platform, they just have to figure out exactly what changes need to be made to open up, and take things to the next level.
First, let’s quantify the company: what is FamilySearch? “For over 100 years, FamilySearch has been actively gathering, preserving, and sharing genealogical records worldwide”, believing that “learning about our ancestors helps us better understand who we are—creating a family bond, linking the present to the past, and building a bridge to the future”.
FamilySearch is 1.2 billion total records, with 108 million completed in 2014 so far, with 24 million awaiting, as well as 386 active genealogical projects going on. Family Search provides the ability to manage photos, stories, documents, people, and albums—allowing people to be organized into a tree, knowing the branch everyone belongs to in the global family tree.
FamilySearch started out as the Genealogical Society of Utah, which was founded in 1894 and dedicated to preserving the records of the family of mankind, looking to "help people connect with their ancestors through easy access to historical records”. FamilySearch is a mission-driven, non-profit organization, run by The Church of Jesus Christ of Latter-day Saints. All of this comes together to define an entity that possesses an image that will appeal to some, while leaving others concerned—making for a pretty unique formula for an API driven platform, one that doesn’t quite have a model anywhere else.
FamilySearch considers what they deliver to be a set of record custodian services:
- Image Capture - Obtaining a preservation quality image is often the most costly and time-consuming step for records custodians. Microfilm has been the standard, but digital is emerging. Whether you opt to do it yourself or use one of our worldwide camera teams, we can help.
- Online Indexing - Once an image is digitized, key data needs to be transcribed in order to produce a searchable index that patrons around the world can access. Our online indexing application harnesses volunteers from around the world to quickly and accurately create indexes.
- Digital Conversion - For those records custodians who already have a substantial collection of microfilm, we can help digitize those images and even provide digital image storage.
- Online Access - Whether your goal is to make your records freely available to the public or to help supplement your budget needs, we can help you get your records online. To minimize your costs and increase access for your users, we can host your indexes and records on FamilySearch.org, or we can provide tools and expertise that enable you to create your own hosted access.
- Preservation - Preservation copies of microfilm, microfiche, and digital records from over 100 countries and spanning hundreds of years are safely stored in the Granite Mountain Records Vault—a long-term storage facility designed for preservation.
FamilySearch provides a proven set of services that users can take advantage of via a web application, as well as iPhone and Android mobile apps, resulting in the online community they have built today. FamilySearch also goes beyond their basic web and mobile application services, and is elevated to the software as a service (SaaS) level by having a pretty robust developer center and API stack.
FamilySearch provides the required first impression when you land in the FamilySearch developer center, quickly explaining what you can do with the API, "FamilySearch offers developers a way to integrate web, desktop, and mobile apps with its collaborative Family Tree and vast digital archive of records”, and immediately provides you with a getting started guide, and other supporting tutorials.
FamilySearch provides access to over 100 API resources in twenty separate groups: Authorities, Change History, Discovery, Discussions, Memories, Notes, Ordinances, Parents and Children, Pedigree, Person, Places, Records, Search and Match, Source Box, Sources, Spouses, User, Utilities, and Vocabularies—connecting you to the core FamilySearch genealogical engine.
The FamilySearch developer area provides all the common, and even some forward leaning technical building blocks:
To support developers, FamilySearch provides a fairly standard support setup:
To augment support efforts there are also some other interesting building blocks:
Setting the stage for FamilySearch evolving into a platform, they even possess some necessary partner level building blocks:
There is even an application gallery showcasing what web, mac & windows desktop, and mobile applications developers have built. FamilySearch even encourages developers to “donate your software skills by participating in community projects and collaborating through the FamilySearch Developer Network”.
Many of the ingredients of a platform exist within the current FamilySearch developer hub, at least the technical elements, and some of the common business, and political building blocks of a platform, but what is missing? This is what makes FamilySearch a compelling story, because it emphasizes one of the core elements of API Evangelist—that all of this API stuff only works when the right blend of technical, business, and politics exists.
Establishing A Rich Partnership Environment
FamilySearch has some strong partnerships that have helped establish FamilySearch as the genealogy service it is today. FamilySearch knows they wouldn’t exist without the partnerships they’ve established, but how do you take it to the next level, and grow a much larger, organic, API driven ecosystem where a long tail of genealogy businesses, professionals, and enthusiasts can build on, and contribute to, the FamilySearch platform?
FamilySearch knows the time has come to make a shift to being an open platform, but is not entirely sure what needs to happen to actually stimulate not just the core FamilySearch partners, but also establish a vibrant long tail of developers. A developer portal is not just a place where geeky coders come to find what they need, it is a hub where business development occurs at all levels, in both synchronous, and asynchronous ways, in a 24/7 global environment.
FamilySearch acknowledges they have some issues when it comes to investing in API driven partnerships:
- “Platform” means their core set of large partners
- Not treating all partners like first class citizens
- Competing with some of their partners
- Don’t use their own API, creating a gap in perspective
FamilySearch knows if they can work out the right configuration, they can evolve FamilySearch from a digital genealogical web and mobile service into a genealogical platform. If they do this they can scale beyond what they’ve been able to do with a core set of partners, and crowdsource the mapping of the global family tree, allowing individuals to map their own family trees while also contributing to the larger global tree. With a proper API driven platform this process doesn’t have to occur via the FamilySearch website and mobile app; it can happen in any web, desktop, or mobile application anywhere.
FamilySearch already has a pretty solid development team taking care of the tech of the FamilySearch API, and they have 20 people working internally to support partners. They have a handle on the tech of their API; they just need to get a handle on the business and politics of their API, and invest in the resources needed to help scale the FamilySearch API from being just a developer area, to a growing genealogical developer community, to a full blown ecosystem that spans not just the FamilySearch developer portal, but thousands of other sites and applications around the globe.
A Good Dose Of API Evangelism To Shift Culture A Bit
A healthy API evangelism strategy brings together a mix of business, marketing, sales, and technology disciplines into a new approach to doing business for FamilySearch. Done right, it can open up FamilySearch to outside ideas, and with the right framework, allow the platform to move beyond just certification and partnering, to the investment in, and acquisition of, data, content, talent, applications, and partners via the FamilySearch developer platform.
Think of evangelism as the grease in the gears of the platform, allowing it to grow, expand, and handle a larger volume of outreach and support. API evangelism works to lubricate all aspects of platform operation.
First, let's kick off by setting some objectives for why we are doing this--what are we trying to accomplish?
- Increase Number of Records - Increase the number of overall records in the FamilySearch database, contributing to the larger goal of mapping the global family tree.
- Growth in New Users - Grow the number of new users who are building on the FamilySearch API, increasing the overall headcount for the platform.
- Growth In Active Apps - Increase not just new users, but the number of actual apps being built and used--not just people kicking the tires.
- Growth in Existing User API Usage - Increase how existing users are putting the FamilySearch APIs to use. Educate them about new features, and increase adoption.
- Brand Awareness - One of the top reasons for designing, deploying, and managing an active API is to increase awareness of the FamilySearch brand.
- What else?
What does developer engagement look like for the FamilySearch platform?
- Active User Engagement - How do we reach out to existing, active users and find out what they need, and how do we profile them and continue to understand who they are and what they need. Is there a direct line to the CRM?
- Fresh Engagement - How is FamilySearch contacting new developers who have registered each week, to see what their immediate needs are while their registration is fresh in their minds?
- Historical Engagement - How are historically active and / or inactive developers being engaged, to better understand what their needs are, and what would make them active or increase their activity?
- Social Engagement - Is FamilySearch profiling the URL, Twitter, Facebook, LinkedIn, and Github profiles of developers, and then actively engaging via these active channels?
Establish a Developer Focused Blog For Storytelling
- Projects - There are over 390 active projects on the FamilySearch platform, plus any number of active web, desktop, and mobile applications. All of this activity should be regularly profiled as part of platform evangelism. An editorial assembly line of technical projects that can feed blog stories, how-tos, samples and Github code libraries should be taking place, establishing a large volume of exhaust via the FamlySearch platform.
- Stories - FamilySearch is great at writing public and partner facing content, but there is also a need for writing, editing and posting stories derived from the technically focused projects, with SEO and API support by design.
- Syndication - Syndicate the best of the content to Tumblr, Blogger, Medium and other relevant blogging sites on a regular basis.
Mapping Out The Genealogy Landscape
- Competition Monitoring - Evaluation of regular activity of competitors via their blog, Twitter, Github and beyond.
- Alpha Players - Who are the vocal people in the genealogy space with active Twitter, blogs, and Github accounts?
- Top Apps - What are the top applications in the space, whether built on the FamilySearch platform or not, and what do they do?
- Social - Mapping the social landscape for genealogy, who is who, and who should the platform be working with.
- Keywords - Establish a list of keywords to use when searching for topics at search engines, QA sites, forums, social bookmarking and social networks. (should already be done by marketing folks)
- Cities & Regions - Target specific markets in cities that make sense to your evangelism strategy. What are the local tech meetups, organizations, schools, and other gatherings? Who are the tech ambassadors for FamilySearch in these spaces?
Adding To Feedback Loop From Forum Operations
- Stories - Derive stories for the blog from forum activity and the actual needs of developers.
- FAQ Feed - Is this being updated regularly with fresh content?
- Streams - Are there other streams giving the platform a heartbeat?
Being Social About Platform Code and Operations With Github
- Setup Github Account - Set up a FamilySearch platform developer account, and bring the internal development team together under a single team umbrella.
- Github Relationships - Managing of followers, forks, downloads and other potential relationships via Github, which has grown beyond just code, and is social.
- Github Repositories - Managing of code sample Gists, official code libraries and any samples, starter kits or other code samples generated through projects.
Adding To The Feedback Loop From The Bigger FAQ Picture
- Quora - Regular trolling of Quora and responding to relevant FamilySearch or industry related questions.
- Stack Exchange - Regular trolling of Stack Exchange / Stack Overflow and responding to relevant FamilySearch or industry related questions.
- FAQ - Add questions from the bigger FAQ picture to the local FamilySearch FAQ for referencing locally.
Leverage Social Engagement And Bring In Developers Too
- Facebook - Consider setting up a new API specific Facebook company page. Post all API evangelism activities and manage friends.
- Google Plus - Consider setting up a new API specific Google+ company page. Post all API evangelism activities and manage friends.
- LinkedIn - Consider setting up a new API specific LinkedIn profile page that will follow developers and other relevant users for engagement. Post all API evangelism activities.
- Twitter - Consider setting up a new API specific Twitter account. Tweet all API evangelism activity and relevant industry landscape activity, discover new followers, and engage with followers.
Sharing Bookmarks With the Social Space
- Hacker News - Social bookmarking of all relevant API evangelism activities as well as relevant industry landscape topics to Hacker News, to keep a fair and balanced profile, as well as network and user engagement.
- Product Hunt - Product Hunt is a place to share the latest tech creations, providing an excellent format for API providers to share details about their new API offerings.
- Reddit - Social bookmarking of all relevant API evangelism activities as well as relevant industry landscape topics to Reddit, to keep a fair and balanced profile, as well as network and user engagement.
Communicate Where The Roadmap Is Going
- Roadmap - Provide regular roadmap feedback based upon developer outreach and feedback.
- Changelog - Make sure the change log always reflects the roadmap communication, or there could be backlash.
Establish A Presence At Events
- Conferences - What are the top conferences occurring that we can participate in or attend? Pay attention to calls for papers at relevant industry events.
- Hackathons - What hackathons are coming up in 30, 90, 120 days? Which should be sponsored, attended, etc.?
- Meetups - What are the best meetups in target cities? Are there different formats that would best meet our goals? Are there any sponsorship or speaking opportunities?
- Family History Centers - Are there local opportunities for the platform to hold training, workshops and other events at Family History Centers?
- Learning Centers - Are there local opportunities for the platform to hold training, workshops and other events at Learning Centers?
Measuring All Platform Efforts
- Activity By Group - Summary and highlights from weekly activity within each area of the API evangelism strategy.
- New Registrations - Historical and weekly accounting of new developer registrations across APIs.
- Volume of Calls - Historical and weekly accounting of API calls per API.
- Number of Apps - How many applications are there?
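The weekly measurement cadence described above can be sketched as a simple report generator. This is only an illustration; the metric names, the event tuples, and the idea of pulling them from API management logs are all hypothetical, not part of any actual FamilySearch tooling.

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical raw events: (iso_date, metric, value) tuples, e.g. exported
# from API management logs. Metric names are placeholders.
events = [
    ("2014-09-01", "new_registrations", 12),
    ("2014-09-03", "api_calls", 48000),
    ("2014-09-08", "new_registrations", 9),
    ("2014-09-10", "api_calls", 52500),
]

def weekly_summary(events):
    """Roll raw events up into per-week totals for each metric."""
    weeks = defaultdict(lambda: defaultdict(int))
    for iso, metric, value in events:
        d = date.fromisoformat(iso)
        # Key each event by the Monday that starts its week.
        monday = d - timedelta(days=d.weekday())
        weeks[monday][metric] += value
    return {wk: dict(m) for wk, m in sorted(weeks.items())}

report = weekly_summary(events)
for week, metrics in report.items():
    print(week.isoformat(), metrics)
```

The point of a sketch like this is that a historical, weekly accounting only happens if someone wires the raw numbers into a repeatable report, rather than assembling them by hand each time.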
Essential Internal Evangelism Activities
- Storytelling - Telling stories about an API isn't just something you do externally. What stories need to be told internally to make sure an API initiative is successful?
- Conversations - Incite internal conversations about the FamilySearch platform. Hold brown bag lunches if you need to, or internal hackathons to get them involved.
- Participation - It is very healthy to include people from across the company in API operations. How can we include people from other teams in API evangelism efforts? Bring them to events and conferences, and potentially expose them to local, platform focused events.
- Reporting - Sometimes providing regular numbers and reports to key players internally can help keep operations running smoothly. What reports can we produce? Make them meaningful.
All of this evangelism starts with a very external focus, which is a hallmark of API and developer evangelism efforts, but if you notice, by the end we are bringing it home to the most important aspect of platform evangelism, the internal outreach. A lack of internal evangelism is the number one reason APIs fail: failing to educate top and mid-level management, as well as lower level staff, to get buy-in and direct hands-on involvement with the platform, and to justify the budget for the resources needed to make a platform successful.
Top-Down Change At FamilySearch
The change FamilySearch is looking for already has top level management buy-in; the problem is that the vision is not in lock step sync with actual platform operations. When projects developed via the FamilySearch platform are regularly showcased to top level executives, and stories consistent with platform operations are told, management will echo what is actually happening via the FamilySearch platform. This will provide a much more ongoing, deeper message for the rest of the company, and for partners, around what the priorities of the platform are, making it not just a meaningless top down mandate.
An example of this in action is the recent mandate from President Obama that all federal agencies should go “machine readable by default”, which includes using APIs and open data outputs like JSON instead of document formats like PDF. This top down mandate makes for a good PR soundbite, but in reality has little effect on the ground at federal agencies. It has taken two years of hard work on the ground, at each agency, between agencies, and with the public, to even begin to make this mandate a reality at over 20 of the federal government agencies.
Top down change is a piece of the overall platform evolution at FamilySearch, but is only a piece. Without proper bottom-up, and outside-in change, FamilySearch will never evolve beyond just being a genealogical software as a service with an interesting API. It takes much more than leadership to make a platform.
Bottom-Up Change At FamilySearch
One of the most influential aspects of APIs I have seen at companies, institutions, and agencies is the change of culture brought when APIs move beyond just a technical IT effort, and become about making resources available across an organization, and enabling people to do their job better. Without an awareness, buy-in, and in some cases evangelist conversion, a large organization will not be able to move from a service orientation to a platform way of thinking.
If a company as a whole is unaware of APIs, either within its own organization or out in the larger world of popular platforms like Twitter, Instagram, and others, it is extremely unlikely it will endorse, let alone participate in, moving from being a digital service to a platform. Employees need to see the benefits of a platform to their everyday job, and their involvement cannot require what they would perceive as extra work to accomplish platform related duties. FamilySearch employees need to see the benefits the platform brings to the overall mission, and play a role in making this happen, even if it originates from a top-down mandate.
Top bookseller Amazon was already on the path to being a platform with its set of commerce APIs when, after a top down mandate from CEO Jeff Bezos, Amazon internalized APIs in such a way that the entire company interacted and exchanged resources using web APIs, resulting in one of the most successful API platforms, Amazon Web Services (AWS). Bezos mandated that if an Amazon department needed to procure a resource from another department, like server or storage space from IT, it needed to happen via APIs. This wasn't a meaningless top-down mandate; it made employees' lives easier, and ultimately made the entire company more nimble and agile, while also saving time and money. Without buy-in, and execution from Amazon employees, what we know as the cloud would never have occurred.
Change at large enterprises, organizations, institutions and agencies can be expedited with the right top-down leadership, and with the right platform evangelism strategy, one that treats internal stakeholders not just as targets of outreach efforts but includes them in operations, it can result in sweeping, transformational changes. This type of change at a single organization can affect how an entire industry operates, similar to what we've seen from the ultimate API platform pioneer, Amazon.
Outside-In Change At FamilySearch
The final layer of change that needs to occur to bring FamilySearch from being just a service to a true platform is opening up the channels to outside influence, when it comes not just to platform operations, but organizational operations as well. The bar is high at FamilySearch. The quality of services, the expectations of process, and the adherence to the mission are strong, but if you are truly dedicated to providing a database of all mankind, you are going to have to let mankind in a little bit.
FamilySearch is still the keeper of knowledge, but to become a platform you have to let in the possibility that outside ideas, processes, and applications can bring value to the organization, as well as to the wider genealogical community. You have to evolve beyond the notion that the best ideas come only from inside the organization, or just from the leading partners in the space. There are opportunities for innovation and transformation in the long-tail stream, but you have to have a platform set up to encourage, participate in, and be able to identify value in the long-tail stream of an API platform.
Twitter is one of the best examples of how any platform will have to let in outside ideas, applications, companies, and individuals. Much of what we consider as Twitter today was built in the platform ecosystem, from the iPhone and Android apps, to the desktop app TweetDeck, to terminology like the #hashtag. Over the last 5 years, Twitter has worked hard to find the optimal platform balance, regarding how they educate, communicate, invest, acquire, and incentivize their platform ecosystem. Listening to outside ideas goes well beyond the fact that Twitter is a publicly available social platform. With such a large ecosystem of API developers, it is impossible to let in all ideas, but through a sophisticated evangelism strategy of in-person and online channels, in 2014 Twitter has managed to find a balance that is working well.
Having a public facing platform doesn't mean the flood gates are open for ideas and thoughts to just flow in; this is where service composition, and the certification and partner framework for FamilySearch, will come in. Through clear, transparent partner tiers, and open and transparent operations and communications, an optimal flow of outside ideas, applications, companies and individuals can be established, enabling a healthy, sustainable amount of change from the outside world.
Knowing All Of Your Platform Partners
The hallmark of any mature online platform is a well established partner ecosystem. If you've made the transition from service to platform, you've established a pretty robust approach to not just certifying and onboarding your partners; you have also stepped it up in knowing and understanding who they are, what their needs are, and investing in them throughout the lifecycle.
First off, profile everyone who comes through the front door of the platform. If they sign up for a public API key, who are they, and where do they potentially fit into your overall strategy? Don't be pushy, but understand who they are and what they might be looking for, and make sure you have a track for this type of user well defined.
Next, qualify and certify as you have been doing. Make sure the process is well documented, but also transparent, allowing companies and individuals to quickly understand what it will take to get certified, what the benefits are, and examples of other partners who have achieved this status. As a developer building a genealogical mobile app, I need to know what I can expect, and have some incentive for investing in the certification process.
Keep your friends close, and your competition closer. Open the door wide for your competition to become platform users, and potentially partners. 100+ year old technology company Johnson Controls (JCI) was concerned about what the competition might do if they opened up their building efficiency data resources to the public via the Panoptix API platform, but after it launched, they realized their competitors were now their customers, and partners in this new approach to doing business online for JCI.
When the Department of Energy decides what data and other resources it makes available via Data.gov or the agency's developer program, it has to deeply consider how this could affect U.S. industries. The resources the federal agency possesses can be pretty high value, with huge benefits for the private sector, but in some cases opening up APIs, or limiting access to APIs, could help or hurt the larger economy, as well as the Department of Energy developer ecosystem. There are lots of considerations when opening up API resources, and they vary from industry to industry.
There are no silver bullets when it comes to API design, deployment, management, and evangelism. It takes a lot of hard work, communication, and iterating before you strike the right balance of operations, and every business sector will be different. Without knowing who your platform users are, and being able to establish a clear and transparent road for them to follow to achieve partner status, FamilySearch will never elevate to a true platform. How can you scale the trusted layers of your platform, if your partner framework isn’t well documented, open, transparent, and well executed? It just can’t be done.
Meaningful Monetization For Platform
All of this will take money to make happen. Designing and executing on the technical and evangelism aspects I'm laying out will cost a lot of money, and on the consumer side, it will take money to design, develop, and manage desktop, web, and mobile applications built around the FamilySearch platform. How will both the FamilySearch platform and its participants make ends meet?
This conversation is a hard one for startups and established businesses, let alone for a non-profit, mission driven organization. Internal developers cost money; servers and bandwidth are getting cheaper but are still a significant platform cost; and sustaining sales, bizdev, and evangelism will not be cheap either. It takes money to properly deliver resources via APIs, and even if the lowest tiers of access are free, at some point consumers are going to have to pay for access, resources, and advanced features.
The conversation around how you monetize API driven resources is going on across government, from cities up to the federal government, where the thought of charging for access to public data is unheard of. These are public assets, and they should be freely available. While this is true, think of the same situation when it comes to physical public assets that are owned by the government, like parks. You can freely enjoy many city, county, and federal parks, and there are sometimes small fees for usage, but if you want to actually sell something in a public park, you will need to buy permits, and often share revenue with the managing agency. We have to think critically about how we fund the publishing and refinement of publicly owned digital assets; as with physical assets, there will be much debate in coming years around what is acceptable, and what is not.
Woven into the tiers of partner access, there should always be provisions for applying costs, overhead, and even the generation of a little revenue to be applied in other ways. With great power comes great responsibility, and along with great access for FamilySearch partners, many will also be required to cover the costs of compute capacity, storage, and the other hard facts of delivering a scalable platform around any valuable digital assets, whether privately or publicly held.
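As a rough illustration of this cost-recovery idea, the sketch below allocates a monthly platform cost across partner tiers in proportion to their API usage. Every number and tier name here is invented for illustration; nothing is drawn from actual FamilySearch pricing or operations.

```python
# Hypothetical monthly usage (API calls) per partner tier.
usage = {"public": 200_000, "certified": 600_000, "strategic": 1_200_000}

# Invented figures: monthly infrastructure cost, and the share of it
# the platform chooses to recover from partners rather than absorb.
monthly_cost = 10_000.00
recovery_share = 0.60

def tier_invoices(usage, monthly_cost, recovery_share):
    """Split the recoverable cost across tiers, proportional to usage."""
    total_calls = sum(usage.values())
    recoverable = monthly_cost * recovery_share
    return {tier: round(recoverable * calls / total_calls, 2)
            for tier, calls in usage.items()}

invoices = tier_invoices(usage, monthly_cost, recovery_share)
```

A usage-proportional split is only one possible design; flat tier fees or free lower tiers subsidized by upper tiers are equally plausible, and the right mix depends on the mission and the community being served.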
Platform monetization doesn’t end with covering the costs of platform operation. Consumers of FamilySearch APIs will need assistance in identifying the best ways to cover their own costs as well. Running a successful desktop, web or mobile application takes discipline, structure, and the ability to manage overhead costs, while also being able to generate some revenue through a clear business model. As a platform, FamilySearch will have to bring to the table some monetization opportunities for consumers, providing guidance as part of the certification process regarding best practices for monetization, and even some direct opportunities for advertising, in-app purchases and other common approaches to application monetization and sustainment.
Without revenue greasing the gears, no service can achieve platform status. As with all other aspects of platform operations, the conversation around monetization cannot be one-sided, and just about the needs of the platform provider. Proactive steps need to be taken to ensure both the platform provider and its consumers are monetizing in the healthiest way possible, bringing as much benefit to the overall platform community as possible.
Open & Transparent Operations & Communications
How does all of this talk of platform and evangelism actually happen? It takes a whole lot of open, transparent communication across the board. Right now the only active part of the platform is the FamilySearch Developer Google Group; beyond that you don’t see any activity that is platform specific. There are active Twitter, Facebook, Google+, and mainstream and affiliate focused blogs, but nothing that serves the platform, or contributes to the feedback loop that will be necessary to take the service to the next level.
On a public platform, communications cannot all be private emails, phone calls, or face to face meetings. One of the things that allows an online service to expand to become a platform, then scale and grow into a robust, vibrant, and active community, is a stream of public communications, including blogs, forums, social streams, images, and video content. These communication channels cannot all be one way; they need to include forum and social conversations, and showcase platform activity by API consumers.
Platform communication isn’t just about getting direct messages answered; it is about public conversation, so everyone shares in the answer, and public storytelling to help guide and lead the platform. Together with support via multiple channels, this establishes a feedback loop that, when done right, will keep growing, expanding and driving healthy growth. The transparent nature of platform feedback loops is essential to providing everything consumers will need, while also bringing a fresh flow of ideas and insight within the FamilySearch firewall.
Truly Shifting The FamilySearch Culture
Top-down, bottom-up, outside-in, with a constant flow of oxygen via a vibrant feedback loop, and the nourishing, sanitizing sunlight of platform transparency, is how, week by week and month by month, some change can occur. It won’t all be good; there are plenty of problems that arise in ecosystem operations, but all of this has the potential to slowly shift culture when done right.
One thing that shows me the team over at FamilySearch has what it takes is that when I asked if I could write this up as a story, rather than just a proposal I email to them, they said yes. This is a true test of whether or not an organization might have what it takes. If you are unwilling to be transparent about the problems you currently have, and the work that goes into your strategy, it is unlikely you will have what it takes to establish the amount of transparency required for a platform to be successful.
When internal staff, large external partners, and long tail genealogical app developers and enthusiasts are in sync via a FamilySearch platform driven ecosystem, I think we can consider that a shift to platform has occurred for FamilySearch. The real question is, how do we get there?
Executing On Evangelism
This is not a definitive proposal for executing on an API evangelism strategy, merely a blueprint for the seed that can be used to start a slow, seismic shift in how FamilySearch engages its API area, in a way that will slowly evolve it into a community, one that includes internal, partner, and public developers. Some day, with the right set of circumstances, FamilySearch could grow into a robust, social, genealogical ecosystem where everyone comes to access, and participate in, the mapping of mankind.
- Defining Current Platform - Where are we now? In detail.
- Mapping the Landscape - What does the world of genealogy look like?
- Identifying Projects - What are the existing projects being developed via the platform?
- Define an API Evangelist Strategy - Actually fleshing out a detailed strategy.
- External Public
- External Partner
- Internal Stakeholder
- Internal Company-Wide
- Identify Resources - What resources currently exist? What are needed?
- Content / Storytelling
- Execute - What does execution of an API evangelist strategy look like?
- Iterate - What does iteration look like for an API evangelism strategy?
As with many providers, you don’t want this to take 5 years, so how do you take a 3-5 year cycle and execute in 12-18 months?
- Invest In Evangelist Resources - It takes a team of evangelists to build a platform
- External Facing
- Partner Facing
- Internal Facing
- Development Resources - We need to step up the number of resources available for platform integration.
- Code Samples & SDKs
- Embeddable Tools
- Content Resources - A steady stream of content should be flowing out of the platform, and syndicated everywhere.
- Short Form (Blog)
- Long Form (White Paper & Case Study)
- Event Budget - FamilySearch needs to be everywhere, so people know that it exists. It can’t just be online.
There is nothing easy about this. It takes time and resources, and there are only so many elements you can automate when it comes to API evangelism. For something that is very programmatic, it takes more of the human variable to make the API driven platform algorithm work. With that said, it is possible to scale some aspects, and increase the awareness, presence, and effectiveness of FamilySearch platform efforts, which is really what is currently missing.
While as the API Evangelist, I cannot personally execute on every aspect of an API evangelism strategy for FamilySearch, I can provide essential planning expertise for the overall FamilySearch API strategy, as well as provide regular check-ins with the team on how things are going, and help plan the roadmap. The two things I can bring to the table, reflected in this proposal, are an understanding of where the FamilySearch API effort currently is, and what is missing to help get FamilySearch to the next stage of its platform evolution.
When operating within a corporate or organizational silo, it can be very easy to lose sight of how other organizations and companies are approaching their API strategy, and miss important pieces of how you need to shift yours. This is one of the biggest inhibitors of API efforts at large organizations, and one of the biggest imperatives for companies to invest in their API strategy and begin the process of breaking operations out of their silo.
What FamilySearch is facing demonstrates that APIs are much more than the technical endpoint that most believe; it takes many other business and political building blocks to truly go from API to platform.
- Multipart Uploads with parallelism
- Support new 5TB Object Size Limit
- Cyberduck 4.0 Is Out: Dropbox Support, Better Finder Integration (macstories.net)
- Easily Upload your Desktop Folders to Google Docs [Google Docs] (lifehacker.com)
- Back up Google Docs to your hardisk with CyberDuck (otterman.wordpress.com)
- RESTful API
- SDK & Code Libraries
- Terms and Conditions
- Case Studies
- Change Log
- Featured Apps
- API Registration
Technology
Paypal provides a large number of tools for its API. However, this information is spread across several areas, making it difficult to find all of it.
Documentation / Tools
- RESTful API
Support / Management
- API Reference
- Express Integration
- Getting Started
- How it Works
- SDK / Code Libraries
- 3rd Party Tools
- Case Studies
- Phone Number
- Change Log
- Account Management
- App Showcase
Documentation / Tools
- RESTful API
Support / Management
- API Reference
- SDK / Code Libraries
- Getting Started
- Change Log
- Issue Tracker
If you think there is a link I should have listed here, feel free to tweet it at me, or submit it as a Github issue. Even though I do this full time, I'm still a one person show; I miss quite a bit, and depend on my network to help me know what is going on.