
API Communications News

These are the news items I've curated in my monitoring of the API space that have some relevance to the API communication conversation and that I wanted to include in my research. I'm using all of these links to better understand how API providers are communicating with their communities, going beyond just the technical details of each request and response.

Please Refer The Engineer From Your API Team To This Story

I reach out to API providers on a regular basis, asking them if they have an OpenAPI or Postman Collection available behind the scenes. I am adding these machine readable API definitions to my index of APIs that I monitor, while also publishing them out to my API Stack research, the API Gallery, APIs.io, working to get them published in the Postman Network, and syndicating them as part of my wider work as an OpenAPI member. However, even beyond my own personal needs for API providers to have a machine readable definition of their API, and helping them get more syndication and exposure for their API, having a definition present significantly reduces friction when on-boarding with their APIs at almost every stop along a developer’s API integration journey.

One of the API providers I reached out to recently responded with this, “I spoke with one of our engineers and he asked me to refer you to https://developer.[company].com/”. Ok. First, I spent over 30 minutes there just the other day. Learning about what you do, reading through documentation, and thinking about what was possible–which I referenced in my email. At this point I’m guessing that the engineer in question doesn’t know what an OpenAPI or Postman Collection is, they do not understand the impact these specifications are having on the wider API ecosystem, and lastly, I’m guessing they don’t have any idea who I am (ego taking control). All of which provides me with the signals I need to make an assessment of where any API is in their overall journey. Demonstrating to me that they have a long way to go when it comes to understanding the wider API landscape in which they are operating, and that they are too busy to really come out of their engineering box and help their API consumers truly be successful in integrating with their platform.

I see this a lot. It isn’t that I expect everyone to understand what OpenAPI and Postman Collections are, or even know who I am. However, I do expect people doing APIs to come out of their boxes a little bit, and be willing to maybe Google a topic before responding to a question, or maybe Google the name of the person they are responding to. I don’t use a gmail.com address to communicate, I am using apievangelist.com, and if you are using a solution like Clearbit, or another business intelligence solution, you should always be retrieving some basic details about who you are communicating with, before you ever respond. That is, you do all of this kind of stuff if you are truly serious about operating your API, helping your API consumers be more successful, and taking the time to provide them with the resources they need along the way–things like an OpenAPI, or Postman Collections.

Ok, so why was this response so inadequate?

  • No API Team Present - It shows me that your company doesn’t have any humans there to support the humans that will be using your API. My email went from general support, to a backend engineer who doesn’t care about who I am, or about what I need. This is a sign of what the future will hold if I actually bake their API into my applications–I don’t need my questions lost between support and engineering, with no dedicated API team to talk to.
  • No Business Intelligence - It shows me that your company has put zero thought into the API business model, on-boarding, and support process. Which means you do not have a feedback loop established for your platform, and your API will always be deficient of the nutrients it needs to grow. Always make sure you conduct a lookup based upon the domain or Twitter handle of your consumers to get the context you need to understand who you are talking to.
  • Stuck In Your Bubble - You aren’t aware of the wider API community, and the impact OpenAPI and Postman are having on the on-boarding, documentation, and other stops along the API lifecycle. Which means you probably aren’t going to keep your platform evolving with where things are headed.

Ok, so why should you have an OpenAPI and Postman Collection?

  • Reduce Onboarding Friction - As a developer I won’t always have the time to spend absorbing your documentation. Let me import your OpenAPI or Postman Collection into my client tooling of choice, register for a key, and begin making API calls in seconds, or minutes (see the sketch after this list). Make learning about your API a hands-on experience, something I’m not going to get from your static documentation.
  • Interactive API Documentation - Having a machine readable definition for your API allows you to easily keep your documentation up to date, and make it a more interactive experience. Rather than just reading your API documentation, I should be able to make calls, see responses, errors, and other elements I will need to truly understand what you do. There are plenty of open source interactive API documentation solutions that are driven by OpenAPI and Postman, but you’d know this if you were aware of the wider landscape.
  • Generate SDKs, and Other Code - Please do not make me hand code the integration with each of your API endpoints, crafting each request and response manually. Allow me to autogenerate the most mundane aspects of integration, allowing the OpenAPI or Postman Collection to act as the integration contract.
  • Discovery - Please don’t expect your potential consumers to always know about your company, and regularly return to your developer.[company].com portal. Please make your APIs portable so that they can be published in any directory, catalog, gallery, marketplace, and platform that I’m already using, and frequent as part of my daily activities. If you are in my Postman Client, I’m more likely to remember that you exist in my busy world.
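To make the on-boarding friction point above concrete, here is a minimal sketch of what a machine readable definition enables, assuming a provider publishes an OpenAPI 3.0 document at a hypothetical URL. The URL, API key, and authorization scheme are all placeholders for illustration, not any specific provider's setup:

```python
import requests
import yaml  # pip install pyyaml requests

# Hypothetical location of the provider's machine readable API definition
SPEC_URL = "https://developer.example.com/openapi.yaml"
API_KEY = "key-i-registered-for"  # placeholder credential

# Pull down and parse the OpenAPI definition
spec = yaml.safe_load(requests.get(SPEC_URL).text)

# The definition tells me where the API lives and which paths exist,
# so I can make a first call without reading a single page of docs
base_url = spec["servers"][0]["url"]
first_path = sorted(spec["paths"].keys())[0]

response = requests.get(
    base_url + first_path,
    headers={"Authorization": "Bearer " + API_KEY},
)
print(response.status_code)
print(response.json())
```

That is the whole point of having a definition present: the spec, not the prose documentation, carries everything a client, SDK generator, or documentation tool needs to get a consumer to their first successful call.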

These are just a few of the basics of why this type of response to my question was inadequate, and why you’d want to have OpenAPI and Postman Collections available. My experience on-boarding will be similar to that of other developers, it just happens that the applications I’m developing are outside the normal range of web and mobile applications you have probably been thinking about when publishing your API. But this is why we do APIs, to reach the long tail of users, and encourage innovation around our platforms. I just stepped up and gave 30 minutes of my time (now 60 minutes with this story) to learning about your platform, and pointing me to your developer.[company].com page was all you could muster in return?

Just like other developers will, if I can’t onboard with your API without friction, and I can’t tell if there is anyone home, and willing to give me the time of day when I have questions, I’m going to move on. There are other platforms that will accommodate me. The other downside of your response, and me moving on to another platform, is that now I’m not going to write about your API on my blog. Oh well? After eight years of blogging on APIs, and getting 5-10K page views per day, I can write about a topic or industry, and usually dominate the SEO landscape for that API search term(s) (ego still has control). But…I am moving on, no story to be told here. The best part of my job is there are always stories to be told somewhere else, and I get to just move on, and avoid the friction wherever possible when learning how to put APIs to work.

I just needed this single link to provide in my email response, before I moved on!


My API Storytelling Depends On The Momentum From Regular Exercise And Practice

I’ve been falling short of my normal storytelling quotas recently. I like to have at least three posts on API Evangelist, and two posts on Streamdata.io each day. I have been letting it slip because it was summer, but I will be getting back to my regular levels as we head into the fall. Whenever I put more coal in the writing furnace, I’m reminded of just how much momentum all of this takes, as well as the regular exercise and practice involved, allowing me to keep pace in the storytelling marathon across my blog(s).

The more stories I tell, the more stories I can tell. After eight years of doing this, I’m still surprised about what it takes to pick things back up, and regain my normal levels of storytelling. If you make storytelling a default aspect of doing work each day, finding a way to narrate your regular work with it, it is possible to achieve high volumes of storytelling going out the door, generating search engine and social media traffic. Also, if you root your storytelling in the regular work you are already doing each day, the chances it will be meaningful enough for people to tune in only increase.

My storytelling on API Evangelist is important because it helps me think through what I’m working on. It helps me become publicly accessible by generating more attention to my work, firing up new conversations, and reinforcing the existing ones I’m already having. When the storytelling slows, it means I’m either doing an unhealthy amount of coding or other work, or my productivity levels are suffering overall. This makes my API storytelling a heartbeat of my operations, and a regular stream of storytelling reflects how healthy my heartbeat is from regular exercise, and usage of my words (instead of code).

I know plenty of individuals, and API related operations, that have trouble finding their storytelling voice, expressing that they just don’t have the time or resources to do it properly. Regular storytelling on your blog is hard to maintain, even with the amount of experience I have. Regardless, it is something you just have to do, and you will have to mandate that storytelling becomes a default aspect of your work each day. If you work on it regularly, eventually you’ll find your voice. However, there will always be times where you lose it, and have to work to regain it again. It is just a fight you will have to fight, but ultimately if you continue, it will be totally worth it. I’m very thankful I’ve managed to keep it going for over eight years now, resulting in a pretty solid platform that enables me to do what I do.


The Basics Of The VA API Feedback Loop

I’m working to break down the moving parts of the API efforts over at the VA, and to provide as much relevant feedback as I possibly can. One of the components I’m wanting to think about more is the feedback loop for the VA API efforts. The feedback loop is one of the most essential aspects of doing an API, and can quickly become one of the most debilitating, paralyzing, and nutrient starving aspects of operating an API platform if done wrong, or left non-existent. However, the feedback loop is also one of the most valuable reasons for wanting to do APIs in the first place, providing the essential feedback you will need from consumers, and the entire API ecosystem, to move the API forward in a meaningful way.

Current Seeds Of The VA API Feedback Loop
Currently, the VA is supporting the VA API developer portal using GitHub Issues and email. I mentioned in my previous review of the VA API developer portal that the personal email addresses provided for email support should be generalized, sharing the load when it comes to email support for the platform. Next, I’d like to address the usage of GitHub Issues for support, along with email, and step back to look at how this contributes to, or could possibly take away from, the overall feedback loop for the VA API effort, defining what the basics of an API feedback loop for the VA might be.

Expanding Upon The VA Usage Of GitHub Issues
I support the usage of GitHub Issues for public support of any API related project. It provides a simple, observable way for anyone to get support around the VA APIs. While I’m guessing it was accidental, I like the specialization of the current repo, and usage of GitHub Issues, and that it is dedicated to VA API clients and their needs. I’d encourage this type of continued focus when it comes to establishing additional feedback loops, keeping them dedicated to a specific aspect of operating the VA API platform. It is something that might seem a little messy at first, but could easily be managed with the proper strategy, and usage of the GitHub APIs, which I’ll highlight below.

Makes API Operations More Public And Observable
One of the most important reasons for using GitHub as the cornerstone of the VA API feedback loop is that it allows for transparent, observable, auditable operation of the feedback loop across the VA API platform. One of the critical aspects of the overall health of the VA API platform in the future will be keeping feedback loops as open as they possibly can be. Of course, there are some feedback loops that should remain private, which GitHub Issues can accommodate, but whenever possible the feedback loop for the VA API platform should be in the public domain, allowing all stakeholders, veterans, and the public to actively participate in the process. In a way that ensures every aspect of API operations is documented, and auditable, providing as much accountability as possible across VA API operations.

Allowing For More Modular Organization Of Feedback Loops
GitHub Issues allow for the deployment, management, and organization of more modular feedback loops. Treat your feedback loops just as you would your APIs, making them small, meaningful, and focused on doing one thing well. Any GitHub repository can have its own GitHub Issues, allowing for the deployment of specialized feedback loops based upon single projects that are part of different organizational groups. Beyond the modularity available when you leverage GitHub repositories, and organize them within GitHub Organizations, GitHub Issues can also be tagged, allowing for even more meaningful organization of feedback as it comes in, tagging and bagging it for consideration as part of the road map, and other decision making processes that will be feeding off the VA API platform’s feedback loop.

Enabling Feedback Loop Automation With The GitHub API
Another benefit of using GitHub Issues as an API feedback loop building block is that they also have an API, allowing for the automation of all aspects of the VA API platform feedback loop(s). The GitHub API can be used to aggregate, automate, audit, and work with the GitHub Issues for any GitHub organization and repo the VA has access to. Providing the ability to manage not just the GitHub Issues for a single GitHub repository, but to orchestrate feedback loops across many different GitHub repositories, managed by many different GitHub organizations. Establishing a distributed feedback loop system which VA API leadership can use to coordinate with different internal, agency, partner, vendor, or public stakeholder(s) at scale, across many different projects, and components of the VA API platform.
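As a rough illustration of what this automation could look like, here is a minimal sketch that uses the public GitHub REST API to aggregate open issues across every repository in an organization. The organization name and token are placeholders, and a real implementation would need to handle pagination, rate limits, and private repositories:

```python
import requests

GITHUB_API = "https://api.github.com"
ORG = "va-api-organization"        # placeholder organization name
TOKEN = "personal-access-token"    # placeholder credential

headers = {
    "Authorization": "token " + TOKEN,
    "Accept": "application/vnd.github.v3+json",
}

# List the repositories that belong to the organization
repos = requests.get(f"{GITHUB_API}/orgs/{ORG}/repos", headers=headers).json()

feedback = []
for repo in repos:
    # Pull the open issues for each repository into one aggregate list
    issues = requests.get(
        f"{GITHUB_API}/repos/{ORG}/{repo['name']}/issues",
        headers=headers,
        params={"state": "open"},
    ).json()
    for issue in issues:
        feedback.append({
            "repo": repo["name"],
            "title": issue["title"],
            "labels": [label["name"] for label in issue["labels"]],
            "url": issue["html_url"],
        })

# A single, auditable view of the feedback loop across the platform
for item in feedback:
    print(item["repo"], "-", item["title"], item["url"])
```

The same approach works for tagging, reporting, and routing feedback into a road map, since every label, comment, and milestone is also available through the same API.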

Augmenting Public Feedback With Private GitHub Repos
While it is critical that as many of the feedback loops across the VA API platform as possible are publicly accessible, and observable by everyone, it is also important that there are private channels for communication around some of the components of the platform. This is another reason why GitHub Issues can work as a building block for not just public feedback loops, but for private feedback loops as well. Taking advantage of private repositories when it comes to allowing modular, automated, and private conversations to occur around certain VA API platform projects. Balancing the public aspects of the platform with feedback loops amongst trusted groups, while still leveraging GitHub for delivering the identity and access management aspects of governing private VA feedback loops.

Extending Private GitHub Repos With Email Support
Beyond the private GitHub repositories, and using their issues to facilitate private conversations, it always makes sense to have a generalized and dedicated email account as part of the feedback loop for any API platform. Providing another private, but also vendor neutral, way of supporting the platform. People are just familiar with email, and it makes sense to have a general account that is managed by the many individuals who are coordinating around platform operations. Make it easy to provide feedback around VA API operations, and support anyone participating within the VA API ecosystem.

Auditing, Documenting, And Reporting Upon The VA Feedback Loop
I suggested in my review of the VA API platform that email support should be standardized and delivered via a dedicated email account, so that multiple stakeholders can participate in support of the platform from a VA operational perspective. This way emails can be tagged, organized, and archived in support of the larger VA API feedback loop. Making sure all questions get answered, and that they also contribute to the evolution of the platform. Something that can also be done via the automation described earlier using the GitHub API. Allowing all threads, across any project and organization, to be audited, documented, and reported upon across all VA API operations. Ensuring that there is transparency, observability, and accountability across the VA API platform feedback loop.

Have A Strategy In Place For The VA API Feedback Loop
GitHub Issues and email are the two basic building blocks of any API platform, and I support the VA starting their official journey here. I think GitHub makes for an essential building block of any API platform, when used right. It just helps to have a plan in place for when a repo’s GitHub Issues are included in the overall feedback loop framework, and for the organization and prioritization of the conversations going on there. GitHub Issues spread across many different GitHub repositories, without any real strategy for how they are organized, tagged, and engaged with, can seem overwhelming, and become a mess. However, a little planning, and the establishment of even the most basic approach to managing them, can help develop a pretty robust feedback loop across the VA API platform, one that follows the lead of how open source software gets delivered.

Consider Other API Feedback Loop Building Blocks
I wanted to keep this post just about the basics of the feedback loop for the VA, or for any API platform–GitHub Issues, and email. However, I’d also like to suggest the consideration of some other building blocks, to help augment GitHub Issues, providing some other direct, and indirect, approaches to operating the VA API platform feedback loop:

  • FAQ - Providing a frequently asked questions page that is an aggregate of all the questions that get asked across the GitHub Issues, and via email.
  • Newsletter - Providing a regular channel for updating platform stakeholders, via a structured email newsletter. Offering up private, and public editions, targeting different groups.
  • Road Map - Publishing a road map regarding what is getting built across all projects included within the VA API platform perimeter, aggregating GitHub Issues that evolve as part of the feedback loop and get tagged as milestones for adding to the road map.

I’m always hesitant to make suggestions about where to go next, when an organization is just getting started on their API journey. However, I think the VA team knows when to ignore my advice, and when they can cherry pick the things they want to include in their strategy. I just want to make sure I provide as much constructive criticism about what is there, and constructive feedback around what can be invested in next.

Hopefully this post provides a basic overview of the VA API platform feedback loop. It expands on what they are already doing, and shines a light on some of the positive aspects of using GitHub for the VA API platform feedback loop. I was the one who worked with the former VA CIO Marina Martin (@MarinaNitze) to get the VA GitHub organization set up back in 2013. So it makes me happy to see it being used as a cornerstone of the VA API platform. I am happy to give feedback on how they can continue to put the powerful platform to such good use.


Getting Email Updates From The API Providers I Care About Is One Way To Stay In Sync

I do not like email. I do not have a good relationship with my inbox. However, it is one of those ubiquitous tools I have to use, and I understand the value it can bring to my world. The goal is to not let my inbox control me too much, as it is often a task list that other people think they can control. With all that said, I’m finding renewed value in email newsletters, on several fronts. While I’d prefer to get updates via Atom, I’m warming up to receiving updates from API providers, and API service providers, in my inbox.

I am an active subscriber to the REST API Notes Newsletter, API Developer Weekly, and other relevant newsletters. I’m also finding myself opening up more emails from the API providers, and service providers, I’m registered with. Historically, I have often seen email as a nuisance, but I’m beginning to see emails from the companies I’m paying attention to as a healthy signal. Increasingly, it is a signal that I’m using to understand the overall health of a platform, and yet another signal that will go silent when a platform isn’t supporting their user base, and is potentially running out of funding for their operations.

I recently wrote a script that harvests emails from my inbox, and tracks the communications occurring with the API providers and service providers I am monitoring. At its most basic, it is a heartbeat that I can use to tell when an API provider or service provider is still alive. After that, I’m looking at harvesting URLs, and other data, and using the signals to float an API provider or service provider up on my list. It can be tough to remember to tune into what is going on across hundreds and thousands of APIs, and any signal I can harvest to help companies float up is a positive thing. Hopefully it is something that will also incentivize API providers and service providers to tell more stories via email, as well as their blog and social media.
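The script itself is not complicated. Here is a minimal sketch of the approach using IMAP, where the mail server, credentials, and the list of tracked provider domains are all placeholders, and a production version would filter by date and handle errors:

```python
import email
import imaplib
from email.utils import parseaddr

IMAP_HOST = "imap.example.com"       # placeholder mail server
USERNAME = "me@apievangelist.com"    # placeholder account
PASSWORD = "app-password"            # placeholder credential

# Domains for the API providers and service providers being monitored
TRACKED_DOMAINS = {"twilio.com", "runscope.com", "streamdata.io"}

mailbox = imaplib.IMAP4_SSL(IMAP_HOST)
mailbox.login(USERNAME, PASSWORD)
mailbox.select("INBOX")

# Grab every message id in the inbox (a real script would filter by date)
status, data = mailbox.search(None, "ALL")

heartbeat = {}
for num in data[0].split():
    status, msg_data = mailbox.fetch(num, "(RFC822)")
    message = email.message_from_bytes(msg_data[0][1])
    sender = parseaddr(message.get("From", ""))[1]
    domain = sender.split("@")[-1].lower()
    if domain in TRACKED_DOMAINS:
        # Record the latest date seen for each provider as a simple heartbeat
        heartbeat[domain] = message.get("Date")

for domain, last_seen in heartbeat.items():
    print(domain, "last communicated on", last_seen)
```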

Email is still one of my least favorite signals out there, but I’m beginning to realize there is still a lot of value to be found within my inbox. Having a regular newsletter is something I’m going to write about more, encouraging more API providers and service providers to offer one. I think more companies, institutions, and government agencies feel comfortable telling stories via email, over a public blog. It may not be my preferred medium of choice, but I know that it takes a diverse set of channels to reach a large audience, and who am I to only use the ones I like the most.


Sharing Your API First Principles

I’ve been publishing regular posts from the API design guides of API providers I’ve been studying. API providers who publish their API design guides tend to be further along in their API journey. These API providers tend to have more experience and insight, and are often worth studying further, and learning from. I’ve been getting a wealth of valuable information from the German fashion and technology company Zalando, who has shared some pretty valuable API First principles.

In a nutshell, Zalando’s API First principles focus on two areas:

  • define APIs outside the code first using a standard specification language
  • get early review feedback from peers and client developers

By defining APIs outside the code, Zalando wants to facilitate early review feedback and also a development discipline that focuses service interface design on:

  • profound understanding of the domain and required functionality
  • generalized business entities / resources, i.e. avoidance of use case specific APIs
  • clear separation of WHAT vs. HOW concerns, i.e. abstraction from implementation aspects — APIs should be stable even if we replace complete service implementation including its underlying technology stack

Moreover, API definitions with standardized specification format also facilitate:

  • single source of truth for the API specification; it is a crucial part of a contract between service provider and client users
  • infrastructure tooling for API discovery, API GUIs, API documents, automated quality checks

Their guidance continues by stating:

An element of API First is also a review process to get early review feedback from peers and client developers. Peer review is important for us to get high quality APIs, to enable architectural and design alignment and to support development of client applications decoupled from service provider engineering life cycle.

It is important to learn, that API First is not in conflict with the agile development principles that we love. Service applications should evolve incrementally — and so its APIs.

Of course, our API specification will and should evolve iteratively in different cycles; however, each starting with draft status and early team and peer review feedback. API may change and profit from implementation concerns and automated testing feedback. API evolution during development life cycle may include breaking changes for not yet productive features and as long as we have aligned the changes with the clients. Hence, API First does not mean that you must have 100% domain and requirement understanding and can never produce code before you have defined the complete API and get it confirmed by peer review.

On the other hand, API First obviously is in conflict with the bad practice of publishing API definition and asking for peer review after the service integration or even the service productive operation has started. It is crucial to request and get early feedback — as early as possible, but not before the API changes are comprehensive with focus to the next evolution step and have a certain quality (including API Guideline compliance), already confirmed via team internal reviews.

Zalando gets at one of the reasons I’m a big advocate for a design, mock, and document API lifecycle, forcing developers to realize their API vision, get feedback, and iterate upon their designs, before they ever consider moving into a production environment. I feel like there are many views of what API First means, and many do not focus on the process, obsessing too much on just API design. I think Zalando’s API design guide reflects this, where the guide is more about API guidance than it is just about the design of an API. It provides a wealth of wisdom and knowledge, but is still labeled as being about API design.

Similar to pushing companies, organizations, institutions, and government agencies to do APIs in the first place, I’m encouraging more of them to begin publishing their API design guides as part of their journey. It seems like it is an important part of being able to articulate not just the API design guidelines, but also a place to articulate the overall reasons behind doing APIs in the first place, and the principles, ethics, and processes surrounding how we are doing APIs across the disparate teams that make things go around in our worlds.


How Big Or Small Is An API?

I am working to build out the API Gallery for Streamdata.io, profiling a wide variety of APIs for inclusion in the directory, adding to the wealth of APIs that could be streamed using the service. As I work to build the index, I’m faced with the timeless question: what is an API? Not technically what an API does, but what is an API in the context of helping people discover the API they are looking for. Is Twitter an API, or is the Twitter search/tweets path an API? My answer to this question always distills down to a specific API path, or as some call it, an API endpoint. Targeting a specific implementation, use case, or value generated by a single API provider.

Like most things in the API sector, words are used interchangeably, and depending on how much experience you have in the business, you will have much finer grained definitions about what something is, or isn’t. When I’m talking to the average business user, the Twitter API is the largest possible scope–the entire thing. In the context of API discovery, and helping someone find an API to stream or to solve a specific problem in their world, I’m going to resort to a very precise definition–in this case, it is the specific Twitter API path that will be needed. Depending on my audience, I will zoom out, or zoom in, on what constitutes a unit of API. The only consistency I’m looking to deliver is regarding helping people understand, and find, what they are looking for–I’m not worried about always using the same scope in my definition of what an API is.

You can see an example of this in action with the Alpha Vantage market data API I’m currently profiling, and adding to the gallery. Is Alpha Vantage a single API, or 24 separate APIs? In the context of the Streamdata.io API Gallery, it will be 24 separate APIs. In the context of telling the story on the blog, there is a single Alpha Vantage API, with many paths available. I don’t want someone searching specifically for a currency API to have to wade through all 24 Alpha Vantage paths, I want them to find specifically the path for their currency API. When it comes to API storytelling, I am fine with widening the scope of my definition, but when it comes to API discovery I prefer to narrow the scope down to a more granular unit of value.
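As a loose sketch of how that granularity plays out in the gallery indexing work, assuming a provider's OpenAPI definition has been downloaded locally (the file name is illustrative), each individual path and method becomes its own searchable entry:

```python
import yaml  # pip install pyyaml

HTTP_METHODS = {"get", "post", "put", "patch", "delete"}

# Hypothetical OpenAPI definition downloaded for the provider being profiled
with open("alpha-vantage-openapi.yaml") as handle:
    spec = yaml.safe_load(handle)

provider = spec["info"]["title"]

# Each path becomes its own gallery entry, so someone searching for a
# currency API finds the specific unit of value, not the whole provider
gallery_entries = []
for path, operations in spec["paths"].items():
    for method, operation in operations.items():
        if method not in HTTP_METHODS:
            continue  # skip shared parameters and other non-operation keys
        gallery_entries.append({
            "provider": provider,
            "name": operation.get("summary", path),
            "method": method.upper(),
            "path": path,
            "tags": operation.get("tags", []),
        })

print(len(gallery_entries), "individual APIs indexed for", provider)
```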

For me, it all comes down to the definition of what an API is. It is all about applying a programmatic interface. If I’m applying it in a story that targets a business user, I can speak in general terms. If I’m applying it to solve a specific business problem, I’m going to need to get more precise. This precision can spin out of control if you are dealing with developers who tend to get dogmatic about programming languages, frameworks, platforms, and the other things that make their worlds go round. I’m not in the business of being “right”. I’m in the business of helping people understand, and solve the problems they have. Which gives me a wider license when it comes to defining how big or small an API can be. It is a good place to be.


A Dedicated Guest Blogger Program For Your API

I get endless waves of people wanting to “guest post” on API Evangelist. It isn’t something I’m interested in because of the nature of API Evangelist, and that it really is just my own stream of consciousness, and not about selling any particular product or service. However, if you are an API provider, looking for quality content for your blog, having a formal approach to managing guest bloggers might make sense. Sure, you don’t want to accept all the spammy requests that you will get, but with the right process, you could increase the drumbeat around your API, and build relationships with your partners and API consumers.

There is an example of this in action at the financial data marketplace Intrinio, with their official blogger program. The blogging program for the platform has a set of established benchmarks defined by the Intrinio team, to establish quality for any post that is accepted as part of the program. What I find really interesting, is that they also offer three months of free access to data feeds for API consumers who publish a post via the platform. “Exceptional” participants in the program may have their free access extended, and ALL participants will receive discounts on paid data access subscriptions via the platforms APIs.

This is the type of value exchange I like to see via API platforms. Too many APIs are simple one way streets, paying for GET access to data, content, media, and algorithms. API management shouldn’t be just about metering one way access and charging for it. Sensible API management should measure value exchange around ALL platform resources, including blog and forum posts, and other activities API providers should be incentivizing via their platforms. This is one of the negative side effects of REST I feel–too much focus on resources, and not enough on the events that occur around those resources. Something we are beginning to move beyond in an event-driven API landscape.

Next, I will be profiling the concept of having dedicated data partner programs for your API platform. Showcasing how your API consumers can submit their own data APIs for resell alongside your own resources. In my opinion, every API platform should be opening up every resource for GET, POST, PUT, and DELETE, as well as allowing for the augmenting, aggregation, enrichment, and introduction of other data, content, media, and algorithms, to add more value to what is already going on. Opening up a dedicated guest blogger program modeled after Intrinio’s is a good place to start. Learning how to set up guidelines and benchmarks for submission, and evolving your API management to allow for incentivizing participation. Once you get your feet wet with the blog, you may want to expand to other resources available via the platform, making your API operations a much more community driven thing.


Helping Define Stream(Line)Data.io As More Than Just Real Time Streaming

One aspect of my partnership with Streamdata.io is about helping define what it is that Streamdata.io does–internally, and externally. When I use any API technology I always immerse myself in what it does, and understand every detail regarding the value it delivers, and I work to tell stories about this. This process helps me refine not just how I talk about the products and services, but also helps influence the road map for what the products and services deliver. As I get intimate with what Streamdata.io delivers, I’m beginning to push forward how I talk about the company.

The first thoughts you have when you hear the name Streamdata.io, and learn about how you can proxy any existing JSON API, and begin delivering responses via Server-Sent Events (SSE) and JSON Patch, are all about streaming and real time. While streaming of data from existing APIs is the dominant feature of the service, I’m increasingly finding that the conversations I’m having with clients, and would be clients are more about efficiencies, caching, and streamlining how companies are delivering data. Many API providers I talk to tell me they don’t need real time streaming, but at the same time they have rate limits in place to keep their consumers from polling their APIs too much, increasing friction in API consumption, and not at all about streamlining it.

These experiences are forcing me to shift how I open up conversations with API providers. Making real time and streaming secondary to streamlining how API providers are delivering data to their consumers. Real time streaming using Server-Sent Events (SSE) isn’t always about delivering financial and other data in real time. It is about delivering data using APIs efficiently, making sure only what has been updated and is needed gets delivered, when it is needed. The right time. This is why you’ll see me increasingly adding (line) to the Stream(line)data.io name, helping focus on the fact that we are helping streamline how companies, organizations, institutions, and government agencies are putting data to work–not just streaming data in real time.
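To ground what that means technically, here is a minimal sketch of a consumer that holds a local copy of an API response and keeps it current from a Server-Sent Events stream, applying JSON Patch (RFC 6902) operations as they arrive. The proxy URL and the snapshot/patch event names are placeholders for illustration, not a specific provider's protocol:

```python
import json
import requests
import jsonpatch  # pip install jsonpatch requests

# Placeholder URL for an SSE proxy sitting in front of an existing JSON API
STREAM_URL = "https://streaming-proxy.example.com/stream"

document = None  # local copy of the API response being kept up to date
response = requests.get(
    STREAM_URL, stream=True, headers={"Accept": "text/event-stream"}
)

event, data_lines = None, []
for line in response.iter_lines(decode_unicode=True):
    if line.startswith("event:"):
        event = line[len("event:"):].strip()
    elif line.startswith("data:"):
        data_lines.append(line[len("data:"):].strip())
    elif line == "" and data_lines:
        payload = json.loads("".join(data_lines))
        if document is None or event == "data":
            document = payload  # initial full snapshot of the API response
        elif event == "patch":
            # Only the changes travel over the wire; apply them locally
            document = jsonpatch.apply_patch(document, payload)
        print("local copy updated:", json.dumps(document)[:80], "...")
        event, data_lines = None, []
```

The efficiency story is in that last branch: instead of polling and re-downloading the entire response over and over, the consumer only receives the deltas, which is the streamlining I find myself talking about with providers.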

I really enjoy this aspect of getting to know what a specific type of API technology delivers, combined with the storytelling that I engage in. I was feeling intimidated about talking about streaming APIs with providers who clearly didn’t need it. I’m not that kind of technologist. I just can’t do that. I have to be genuine in what I do, or I just can’t do it. So I was pleasantly surprised to find that conversations were quickly becoming about making things more efficient, over actually ever getting to the streaming real time portion of things. It makes what I do much easier, and something I can continue on a day to day basis, across many different industries.


API Transit Basics: Communication

This is a series of stories I’m doing as part of my API Transit work, trying to map out a simple journey that some of my clients can take to rethink some of the basics of their API strategy. I’m using a subway map visual, and experience to help map out the journey, which I’m calling API transit–leveraging the verb form of transit, to describe what every API should go through.

Moving further towards the human side of this API transit journey, I’d like to focus on one of the areas that I see cause the failure and stagnation of many API operations–basic communications. A lack of communication, and one way communication, are the most common contributors to APIs not reaching their intended audience, and not establishing the much needed feedback loops that contribute to the API road map. This portion of the journey is not rocket science, it just takes stepping back from the tech for a moment and thinking about the humans involved.

When you look at Twitter, Twilio, Slack, Amazon, SalesForce, and the other leading API pioneers you see a handful of communication building blocks present across all of them. These are just a few of the communication elements that should be present in both internal, as well as external or publicly available API operations.

  • Blogs - Make blogs a default part of ALL portals, whether partner, public, or internal. They don’t have to be grand storytelling vehicles, but can be used as part of communicating around updates within teams, groups, and for projects.
  • Twitter - Not required for internally focused APIs, but definitely essential if you are running a publicly available API.
  • Github - Github enables all types of communication around repos, issues, wikis, and other aspects of managing code, definitions, and content on the social coding platform.
  • Slack - Leverage Slack for communicating around APIs throughout their life cycle, providing a history of what has occurred from start to finish.
  • API Path IDs - Establish common DNS + API path identifiers for creating threads around each API, allowing for discussions on BitBucket, Slack, and in emails when it comes to each API.

I recommend making communication a default requirement for all API owners, stewards, and evangelists who work internally, as well as externally. Ensuring that there is communication around the existence, and life cycle, of an API, and helping make sure there is awareness across teams, as well as up the management chain. It’s not rocket science, but it is essential to doing business around programmatic interfaces. You don’t have to be a poet, or prolific blogger, but you do have to care about keeping your API consumers informed.

Communication around API operations is easily overlooked, and difficult to recreate down the road. Just put it in place from the beginning, and don’t worry about activity levels. Be genuine in what you publish and share, and be responsive and open with your readers. Whenever possible make things a two-way street, allowing readers to share their thoughts. Track everything, and route it back into your road map, leveraging all communications as part of the API feedback loop.


Building An API Partner Program For Streamdata.io

I am working with Streamdata.io on a number of fronts when it comes to our partnership in 2018. One of the areas I’m helping them build is the partner program around their API service. I’m taking what I’ve learned from studying the partner programs of other leading APIs, and I am pulling it together into a coherent strategy that Streamdata.io can put to work over time. Like any other area of operations, we are going to start small, and move forward incrementally, making sure we are doing this in a sensible, and pragmatic way.

First up are the basics. What are we trying to accomplish with the Streamdata.io partner program? I want to have a simple and concise answer to what their partner program does, and what it is designed to accomplish.

The Streamdata.io partner program is designed to encourage deeper engagement with companies, organizations, institutions, and government agencies that are putting Streamdata.io solutions to work, or are already operating within industries where Streamdata.io services will complement what they are already doing. This partner program is meant to encourage continued participation by our customers, through offering them exposure, storytelling opportunities, referrals, and even new revenue opportunities. The Streamdata.io partner program is open to the public, just reach out and we’ll let you know if there is a fit between what our organizations are doing.

It is a first draft, and not the official description, but it takes a crack at describing why we are doing it, and a bit about what it is. After looking through our existing partner list, and talking about the objectives of the Streamdata.io partner program, we have settled on a handful of types of partner opportunities available with the program.

  • OEM - Our deeply integrated partners who offer a white label version of Streamdata.io services to their customers.
  • Mutual Lead Referral - Partners with whom we are looking to generate business for each other, sending leads back and forth to help drive growth.
  • Co-Marketing - Partners who include Streamdata.io in their marketing, as well as participate in our partner marketing opportunities.
  • Reseller / Distribution - Streamdata.io customers who are authorized to resell and distribute Streamdata.io services alongside their work.
  • Marketplace - Making sure Streamdata.io is in leading application and API marketplaces, such as Amazon Marketplace.
  • System Integrators - Agencies who offer consulting services, and are up to speed on what Streamdata.io does, and can intelligently offer to their customers.

We are currently going through our existing partner list, and preparing the website page that articulates what the partner program is, and who our existing partners are. Then we are looking to craft a road map for what the future of the Streamdata.io partner program will look like:

  • Storytelling - Opportunities to include their products, services, and use cases in storytelling on the Streamdata.io blog, and via white papers, guides or other formats.
  • Newsletter - Including partners in the newsletter we are launching later this month, allowing for regular exposure via this channel.
  • Testimonials - Getting, and showcasing the testimonials of our partners, and also giving testimonials regarding their solutions.
  • Case Studies - Crafting of full case studies about how a partner has applied Streamdata.io solutions within their products, and services.

We have other ideas, but this represents what we are capable of in the first six months of the Streamdata.io partner program. It gets the program off the ground, and has us reaching out to existing partners, letting them know of the opportunities on the table. It begins showcasing the program, and existing partners, on the showcase page via the website, and invites other customers, and potential customers, to participate. Once we get the storytelling drumbeat going for existing partners, and start driving traffic and links their way, we can look at what it will take to attract and land new partners as part of the effort.

Adding another dimension to this conversation, the telling of this story is directly related to my partnership with Streamdata.io. Which means I am a Streamdata.io partner, and they are one of my partners, and this storytelling is part of the benefits of being an API Evangelist partner. As part of the work on the Streamdata.io partner program we are looking at not just how the storytelling can be executed on Streamdata.io, but also here on API Evangelist. We are also looking into how the reseller, distribution, and integrators tiers of partnership can coexist with the consulting, speaking, and workshops I’m doing as part of my work as the API Evangelist. We will be revisiting this topic each week, and when I have relevant additions, or movements, I will tell the story here on the blog, as I do with most of my work.


Postman As A Live Coding Environment In Presentations At APIStrat

We just wrapped up the 8th edition of APIStrat in Portland, Oregon this last week. I’ll be working through my notes, and memory of the event, in future posts, but one thing that stood out for me was the presence of Postman at the event. No, I’m not talking about their booth, and army of evangelists and company reps on site–although this was the first time I’ve seen them out in such force. I’m talking about the usage of the API development environment by presenters, as a live coding environment in their talks, replacing the command line and browser for how you demonstrate the magic of APIs to your audience.

On the first day of the conference I attended two separate workshops where Postman was the anchor for the talk. As they worked their way through their slides they kept switching back to the Postman application to show some sort of real results, from an actual, or mocked, API. It is the new live coding environment for API evangelists, architects, designers, developers, and security folks. It is the quickest way to go from API concept, to demonstrating API solutions in any presentation. What I also really like is that it transcends any single programming language. In the past, I’ve always hated when someone would bust out some .NET code to show an API call, or something very language or platform specific. Postman reflects a more API way of doing things, one that is elevated above the dogma of any single programming language community.

I am beginning to use Postman and Restlet client in my API training and curriculum more. Directing my users to actually try something out in the API client before moving on to the next step. It is kind of becoming the new interactive API documentation, but something that is linkable from any story, training materials, or incorporated directly into a live talk. As an evangelist it is yet another reason to maintain OpenAPI definitions and Postman Collections of all your most common API use cases. So that you can find, and include all your relevant API calls directly into your storytelling. I’m going to start exploring the viability of doing this with non-developers, and folks who aren’t familiar with Postman yet. I have a feeling that within the echo chamber there is a sort of familiarity taking hold when it comes to Postman, but I want to make sure the environment isn’t a shock for newcomers, and is something that can help users go from zero to API response in 60 seconds or less.

It is interesting to watch the API space evolve, and what tools become commonplace like Postman has become. For a while the Apigee API Explorer was commonplace, but it has quickly fallen into the background. Swagger UI has enjoyed dominance when it comes to interactive API documentation, but that is something I see beginning to shift. I’m hoping that API clients and development environments like Postman, Restlet, PAW, and Insomnia continue to develop mindshare. These tools are all good for helping make API integration easier, but as I saw at APIStrat, they can also help us communicate with our readers, and audience, about what is possible with APIs. Speaking to as wide of a group as possible, elevating above any single programming language, and just speaking API.


Your API Road Map Helps Others Tell Stories About Your API

There are many reasons you want to have a road map for your API. It helps you communicate with your API community about where you are going with your API. It also helps you have a plan in place for the future, which increases the chances you will be moving things forward in a predictable and stable way. When I’m reviewing an API and I don’t see a public API road map available, I tend to give them a ding on reliability and communication for their operations. One of the reasons we do APIs is to help us focus externally with our digital resources, in which communication plays an important role, and when API providers aren’t communicating effectively with their community, there are almost always other issues right behind the scenes.

A road map for your API helps you plan, and think through how and what you will be releasing for the foreseeable future. Communicating this plan externally helps force you to think about your road map in the context of your consumers. Having a road map, and successfully communicating about it via a blog, on Twitter, and other channels, helps keep your API consumers in tune with what you are doing. In my opinion, an API road map is an essential building block for all API providers, not just because it has direct value on the health of API operations, but because it also provides an external sign of the overall health of a platform.

Beyond the direct value of having an API road map, there are other reasons for having one that go beyond just your developer community. In a story in Search Engine Land about Google Posts, the author directly references the road map as part of their storytelling. “In version 4.0 of the API, Google noted that ‘you can now create Posts on Google directly through the API.’ The changelog includes a bunch of other features, but the Google Posts is the most notable.” Adding another significant dimension to the road map conversation, and helping out with SEO, and other API marketing efforts.

As you craft your road map you might not be thinking about the fact that people might be referencing it as part of stories about your platform. I wouldn’t sweat it too much, but you should at least make sure you are having a conversation about it with your team, and maybe add an editorial cycle to your road map publishing process. Making sure what you publish will speak to your API consumers, but also makes for quotable nuggets that other folks can use when referencing what you are up to. This is all just a thought I am having as I’m monitoring the space, and reading about what other APIs are up to. I find road maps to be a critical piece of API communication, and support, and I depend on them to do what I do as the API Evangelist. I just wanted to let you know how important your API road map is, so you don’t forget to give it some energy on a regular basis.


AdWords API Release and Sunset Schedule For 2018

APIs are not forever, and eventually will go away. The trick with API deprecation is to communicate clearly, and regularly with API consumers, making sure they are prepared for the future. I’ve been tracking on the healthy, and not so healthy practices when it comes to API deprecation for some time now, but felt like Google had some more examples I wanted to add to our toolbox. Their approach to setting expectations around API deprecation is worthy of emulating, and making common practice across industries.

The Google AdWords API team is changing their release schedule, which in turn impacts the number of APIs they’ll support, and how quickly they will be deprecating their APIs. They will be releasing new versions of the API three times a year, in February, June and September. They will also be only supporting two releases concurrently at all times, and three releases for a brief period of four weeks, pushing the pace of API deprecation alongside each release. I think that Google’s approach provides a nice blueprint that other API providers might consider adopting.

Adopting an API release and sunset schedule helps communicate the changes on the horizon, but it also provides a regular rhythm that API consumers can learn to depend on. You just know that there will be three releases a year, and you have a quantified amount of time to invest in evolving integration before any API is deprecated. It’s not just the communication around the roadmap, it is about establishing the schedule, and establishing an API release and sunset cadence that API consumers can be in sync with. Something that can go a lot further than just publishing a road map, and tweeting things out.

I’ll add this example to my API deprecation research. Unfortunately the topic is one that isn’t widely communicated around in the API space, but Google has long been a strong player when it comes to finding healthy API deprecation examples to follow. I’m hoping to get to the point soon where I can publish a simple guide to API deprecation. Something API providers can follow when they are defining and deploying their APIs, establishing a regular API release and deprecation approach that API developers can depend on. It can be easy to get excited about launching a new API, and forget all about its release and deprecation cycles, so a little guidance goes a long way to helping API providers think about the bigger picture.


Their Security Practices Are Questionable But Their Communication Is Unacceptable

I study the API universe every day of the week, looking for common patterns in the way people are using technology. I study almost 100 stops along the API lifecycle, looking for healthy practices that companies, organizations, institutions, and government agencies can follow when dialing in their API operations. Along the way I am also looking for patterns that aren’t so healthy, which are contributing to many of the problems we see across the API sector, but more importantly the applications and devices that they are delivering valuable data, content, media, and algorithms to.

One layer of my research is centered around studying API security, which also includes keeping up with vulnerabilities and breaches. I also pay attention to cybersecurity, which is a more theatrical version of regular security, with more drama, hype, and storytelling. I’ve been reading everything I can on the Equifax, Accenture, and other scary breaches, and like the other areas of the industry I track on, I’m beginning to see some common patterns emerge. It is something that starts with the way we use (or don’t use) technology, but then is significantly amplified by the human side of things.

There are a number of common patterns that contribute to these breaches on the technical side, such as not enough monitoring, logging, and redundancy in security practices. However, there are also many common patterns emerging from the business approach taken by leadership during security incidents, and breaches. These companies’ security practices are questionable, but I’d say the thing that is the most unacceptable about all of these is the communication around these security events. I feel like they demonstrate just how dysfunctional things are behind the scenes at these companies, but also demonstrate their complete lack of respect and concern for the individuals who are impacted by these incidents.

I am pretty shocked by seeing how little some companies are investing in API security. The lack of conversation from API providers about their security practices, or lack of, demonstrates how much work we still have to do in the API space. It is something that leaves me concerned, but willing to work with folks to help find the best path forward. However, when I see companies do all of this, and then do not tell people for months, or years after a security breach, and obfuscate, and bungle the response to an incident, I find it difficult to muster up any compassion for the situations these companies have put themselves in. Their security practices are questionable, but their communication around security breaches is unacceptable.


A Guest Blogger Program To Create Unique Content For Your API

Creating regular content for your blog is essential to maintaining a presence. If you don’t publish regularly, and refresh your content, you will find your SEO, and wider presence, quickly becoming irrelevant. I understand that unlike me, many of you have jobs, and responsibilities when it comes to operating your APIs, and carving out the time to craft regular blog posts can be difficult. To help you in your storytelling journey I am always looking for approaches that can help alleviate your pain, while helping keep your blog active, and ensuring folks will continue stumbling across your API, or API service, while on Google, or on social media.

Another interesting example of how to keep your blog fresh came from my partners over at Runscope, who conducted a featured guest blog post series, where they were paying API community leaders to help “create an incredible resource of blog posts about APIs, microservices, DevOps, and QA.” Which has produced a handful of interesting posts.

One thing to note is that Runscope paid $500.00 per post to help raise the bar when it comes to the type of author that will step up for such an effort. I’ve seen companies try to do this before, offering gift cards, swag, and even nothing in return, with varying grades of success and failure. I’m not saying a guest author program for your blog will always yield the results you are looking for, but it is a good way to help build relationships with your community, and help augment your existing workload, with some regular storytelling on the blog.

A guest blogger program is a tool I will be adding to my API communications research, expanding on the tools API operators have in their toolbox to keep their communication strategies active. An active blog does more than just educate your community, and boost your SEO. An active blog that is informative and relevant shows that there is someone home behind an API, and that they are investing in the platform. While there are exceptions, the clearest sign that an API will soon be deprecated, or does not have the resources to support consumers properly, is when the blog hasn’t been updated in the last six months. While I’m reviewing, indexing, and learning about different APIs, when I come across an inactive blog, or Twitter account for an API, I’ll almost always keep moving, feeling like there really isn’t much worthwhile there, as it will soon be gone.


Communication Strategy Filler Using Sections Of Your API Documentation


Coming up with creative things to write about regularly on the blog, and on Twitter, when you are operating an API is hard. It has taken a lot of discipline to keep posts going up on API Evangelist regularly for the last seven years–totaling almost 3K stories told so far. I don’t expect every API provider to have the same obsessive compulsive disorder that I do, so I’m always looking for innovative things that they can do to communicate with their API communities–something that Amazon Web Services is always good at, providing healthy examples that I feel I can showcase.

One thing the AWS team does on a regular basis is tweeting out links to specific areas of their documentation that help users accomplish specific things with AWS APIs. The AWS security team is great at doing this, with recent examples focusing on securing things with the AWS Directory Service, and AWS Organizations. Each contains a useful description, attractive looking image, and a link to a specific page in the documentation that helps you learn more about what is possible.

I have been pushing myself to make sure all headers, and subheaders, in my API documentation have anchors, so that I can link not just to a specific page, but to a specific section, path, or other relevant item within my API documentation. This helps me in my storytelling when I’m looking to reference specific topics, and would help when it comes to tweeting out regular elements across my documentation. I’m slowly going to phase out some of the lower grade tweets of curated news that I push out, and replace them with relevant work I do in specific areas of my research–using my own work to fill the cracks instead of the less than exciting things I may come across in the API space.
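To illustrate what I mean by anchoring every header, here is a minimal sketch in Python–the slug format, function names, and example page name are my own assumptions, not any particular documentation tool's convention–that adds an HTML anchor above each markdown heading so every section becomes deep linkable.

```python
import re

def slugify(heading):
    """Turn a heading like '## Creating A Key' into an anchor slug like 'creating-a-key'."""
    text = heading.lstrip('#').strip().lower()
    text = re.sub(r'[^a-z0-9\s-]', '', text)  # drop punctuation
    return re.sub(r'\s+', '-', text)          # spaces become hyphens

def add_anchors(markdown):
    """Prefix every markdown heading with an HTML anchor so sections can be linked directly."""
    output = []
    for line in markdown.splitlines():
        if line.startswith('#'):
            output.append('<a id="{}"></a>'.format(slugify(line)))
        output.append(line)
    return '\n'.join(output)

# The hypothetical heading below becomes linkable as your-docs-page.html#creating-a-key
print(add_anchors("## Creating A Key\nDetails on creating an API key go here."))
```

With something like this in place, a tweet can point at a specific section of the documentation rather than just the page it lives on.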

Tweeting out what is possible with your API, with links to specific sections of your API documentation, is something I’m going to add to my API communication research, providing a common building block that other API providers, or even API service providers, can consider when looking for things to fill the cracks in their platform communication strategy. It is simple, useful to API consumers, and is an easy way to keep the Tweet stream regularly flowing, helping developers understand what is possible.

How AWS is doing it…

https://twitter.com/AWSSecurityInfo/status/868059146280550400

https://twitter.com/AWSSecurityInfo/status/867711869900935169


How API Evangelist Works

I’ve covered this topic several times before, but I figured I’d share again for folks who might have just become readers in the last year, providing an overview of how API Evangelist works, to help eliminate confusion as you are navigating around my site, as well as to help you find what you are looking for. First, API Evangelist was started in the summer of 2010 as a research site to help me better understand what is going on in the world of APIs. In 2017, it is still a research site, but it has grown and expanded pretty dramatically into a network of websites, driven by a data and content core.

The most important thing to remember is that all my sites run on Github, which is my workbench in the API Evangelist workshop. apievangelist.com is the front door of the workshop, with each area of my research existing as its own Github repository, at its own subdomain within the apievangelist.com domain. An example of this can be found in my API design research, which you will find at design.apievangelist.com. As I do my work each day, I publish my research to each of my domains, in the form of YAML data for one of these areas:

  • Organizations - Companies, organizations, institutions, programs, and government agencies doing anything interesting with APIs.
  • Individuals - The individual people at organizations, or independently doing anything interesting with APIs.
  • News - The interesting API related, or other news I curate and tag daily in my feed reader or as I browse the web.
  • Tools - The open source tooling I come across that I think is relevant to the API space in some way.
  • Building Blocks - The common building blocks I find across the organizations, and tooling I’m studying, showing the good and the bad of doing APIs.
  • Patents - The API related patents I harvest from the US Patent Office, showing how IP is impacting the world of APIs.

You can find the data for each of my research areas in the _data folder for each repository, which is then rendered as HTML for each subdomain using Liquid, via each Jekyll CMS driven website. All of this is research. It isn’t meant to be perfect, or a comprehensive directory for others to use. If you find value in it–great!! However, it is just my research laying on my workbench. It will change, evolve, and be remixed and reworked as I see fit, to support my view of the API sector. You are always welcome to learn from this research, or even fork and reuse it in your own work. You are also welcome to submit pull requests to add or update content that you come across about your organization or open source tool.
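To give a rough sense of the data core behind each site, here is a minimal sketch in Python–the file path and field names are hypothetical assumptions, not the actual schema in my repositories, and on the real sites Jekyll and Liquid do this rendering rather than a script.

```python
import yaml  # pip install pyyaml

# Hypothetical example: the _data file path and the name/url fields are assumptions,
# not the actual schema used in the API Evangelist repositories.
with open('_data/organizations.yaml') as handle:
    organizations = yaml.safe_load(handle)

# A rough equivalent of what the Liquid templates do when Jekyll builds each site:
# YAML entries become an HTML listing on the subdomain.
for org in organizations:
    print('<li><a href="{url}">{name}</a></li>'.format(**org))
```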

The thing to remember about API Evangelist is that it exists primarily for me. It is about me learning. I take what I learn and publish it as blog posts to API Evangelist. This is how I work through what I’m discovering as part of my research each day, and use as a vehicle to move my understanding of APIs forward. This is where it starts getting real. After seven years of doing this I am reaching 4K to 7K page views per day, and clearly other folks find value in reading, and sifting through my research. Because of this I have four partners, 3Scale, Restlet, Runscope, and Tyk, who pay me money to have their logo on the home page, in the navigation, and via a footer toolbar. Runscope also pays me to place a re-marketing tag on my site so they can show advertising to my users on other websites, and Facebook. This is how I pay my rent, and how I eat each month.

Beyond this base, I take my research and create API Evangelist industry guides. API Definitions, and API Design are the two most recent editions. I’m currently working on ones for data and databases, as well as deployment, and management. These guides are sometimes underwritten by my partners, but mostly they are just the end result of my API research. I also spend time and energy taking what I know and crafting API strategy and white papers for clients, and occasionally I actually create APIs for people–mostly in the realm of human services, or other social good efforts. I’m not really interested in building APIs for startups, or in service of the Silicon Valley machine, even though I enjoy watching, studying, and learning from this world, because there are endless lessons regarding how we can use technology in this community, as well as how we should not be using technology.

That is a pretty basic walk through of how API Evangelist works. It is important to remember I am doing this research for myself. To learn, and to make a living. API Evangelist is a production, a persona I created to help me wade through the technology, business, and politics of APIs. It reflects who I am, but honestly is increasingly more bullshit than it is reality, kind of like the API space. I hope you enjoy this work. I enjoy hearing from my readers, and hearing how my research impacts your world. It keeps me learning each day, and keeps me from ever having to go get a real job. It is always a work in progress and never done. Which I know frustrates some, but I find endlessly interesting, and is something that reflects the API journey, something you have to get used to if you are going to be successful doing APIs in this crazy world.


Sharing Top Sections From Your API Documentation As Part Of Your Communications Strategy

I’m always learning from the API communication practices coming out of the different AWS teams. From the regular storytelling coming out of the Alexa team, to the mythical tales of leadership at AWS that have contributed to the platform’s success, the platform provides a wealth of examples that other API providers can emulate.

As I talked about last week, finding creative ways to keep publishing interesting content to your blog as part of your API evangelism and communications strategy is hard. It is something you have to work at. One way I find inspiration is by watching the API leaders, and learning from what they do. An interesting example I recently found out of the AWS security team was their approach to showcasing the top 20 AWS IAM documentation pages so far in 2017. It is a pretty simple, yet valuable way to deliver some content for your readers, that can also help you expose the dark corners of your API documentation, and other resources on your blog.

The approach from the AWS security team is a great way to generate content without having to come up with good ideas, and it will also help with your SEO, especially if you can cross publish, or promote through other channels. It’s pretty basic content stuff that helps with your overall SEO, and if you play it right, you could also get some SMM juice by tweeting out the story, as well as maybe a handful of the top links from your list. It is pretty listicle type stuff, but honestly if you do it right, it will also deliver value. These are the top answers, in a specific category, that your API consumers are looking for. Helping these answers rise to the top of your blog, search engine, and social media does your consumers good, as well as your platform.

One more tool for the API communications and evangelism toolbox. Something you can pull out when you don’t have any storytelling mojo. Which is something you will need on a regular basis as an API provider, or service provider. It is one of the tricks of the trade that will keep your blog flowing, your readers reading, and hopefully your valuable API, products, services, and stories floating to the top of the heap. And that is what all of this is about–staying on top of the pile, keeping things relevant, valuable, and useful. If we can’t do that, it is time to go find something else to do.


Developing The Ability To Repeat The Same API Stories Over And Over

After seven years of telling stories on API Evangelist I’ve had to repeat myself from time to time. Honestly, I repeat myself A LOT. Hopefully I do it in a way that some of you don’t notice, or at least you are good at filtering the stories you’ve already heard from your feed timeline. My primary target audience is the waves of new folks to the world of APIs I catch with the SEO net I’m casting and working on a daily basis. Secondarily, it is the API echo chamber, and folks who have been following me for a while. I try to write stories across the spectrum, speaking to the leading edge API conversations, as well as the 101 level, and everything in between.

Ask anyone doing API evangelism, advocacy, training, outreach, and leadership–and they’ll tell you that you have to repeat yourself a lot. It is something you get pretty sick of, and if you don’t find ways to make things interesting, and change things up, you will burn out. To help tell the same story over and over I’m always looking for a slightly different angle. Let’s take API Meetups as an example. Writing a story about conducting an API Meetup has been done. Overdone. To write a new story about it I’ll evaluate what is happening at the Meetup that is different, or maybe the company, or the speaker. Diving into the background of what they are doing, looking for interesting things they’ve done. You have to find an angle to wrap the boring in something of value.

API documentation is another topic I cover over, and over, and over. You can only talk about static or interactive API documentation so much. Then you move into the process behind it. Maybe a list of other supporting elements like code samples, visualizations, or authentication. How was the onboarding process improved? How does the open source solution behind it simplify the process? You really have to work at this stuff. You have to explore, scratch, dig through your intended topic until you find an angle that you truly care about. Sure, it has to matter to your readers, but if you don’t care about it, the chances of writing an interesting story diminish.

This process requires you to get to know a topic. Read other people’s writing on the topic. Study it. Spin it around. Dive into other angles like the company or people behind it. Spend time learning the history of how we got here with the topic. If you do all this work, there is a greater chance you will be able to find some new angle that will be interesting. Also, when something new happens in any topical area, you have this wealth of knowledge about it, and you might find a new spark here as well. Even after all that, you still might not find what you are looking for. You still end up with many half-finished stories in your notebook. It is just the way things go. It’s ok. Not everything you write has to see the light of day. Sometimes it will just be exercise for the next round of inspiration. That hard work you are experiencing to find a good story is what it takes to reach the point where you are able to discover the gems, those stories that people read, retweet, and talk about.


Concerns Around Working With The API Evangelist At Large Organizations

I know that I make some tech companies nervous. They see me as being unpredictable, with no guarantees regarding what I will say, in a world where the message should be tightly controlled. I feel it in the silence from many of the folks that are paying attention to me at large companies, and I’ve heard it specifically from some of my friends who aren’t afraid to tell me personally. These concerns keep them from working with me on storytelling projects, and prevent them from telling me stories about what is happening internally behind their firewall. It often doesn’t stop employees from telling me things off the record, but it does hinder official relationships, and keep on the record stories from being shared.

I just want folks to know that I’m not in the scoop, or gotcha business. I only check in on my page views monthly to help articulate where things are with my sponsors. I’m more than happy to keep conversations off the record, and anonymize sources and topics. Even the folks in the space who have pissed me off do not get directly called out by me. Well, most of them. I’ve gone after Oracle a couple of times, but they are the worst of the worst. There are other startups and bigcos who I do not like, and you don’t ever hear me talking trash about them on my blog. Most of my rants are anonymized, generalized, and I take extra care to ensure no enterprise egos, careers, or brands are hurt in the making of API Evangelist.

If you study my work, you’ll see that I talk with federal government agencies, and large enterprise organizations weekly, and I never disclose things I shouldn’t. If you find me unpredictable, I’m guessing you really haven’t been tuning into what I’ve been doing for very long, or your insecurities run deeper than anything to do with me. I’m not in the business of making folks look bad. Most of the companies who are looking bad in the API space do not need my help, they excel at doing it on their own. I’m usually just chiming in to help amplify, and use it as a case study for what API providers should consider NOT DOING in their own API operations. Sure, I may call you out for your dumb patents, and the harmful acquisitions you make, but anything I rant about is going to already be public material–I NEVER do this with private conversations.

So, if you are experiencing reservations about sharing stories with me, or possibly sponsoring some storytelling on API Evangelist because you are worried about what will happen, stop fretting. If you are upfront with me, clear about what is on the record, and what is off, and honest about what you are looking to get out of the relationship, things will be fine. Even if they end up being rocky, I’m not the kind of person to call you out on the blog. I may complain, rant, and vent, but you can look through seven years of the blog and you won’t find me doing that about anyone I’ve specifically worked with on storytelling projects. I don’t always agree with why corporations, institutions, and government agencies are so controlling of the message around their API operations, but I will be respectful of any line you draw for me.


What APIs Excite Me And Fuels My Research And Writing

I am spending two days this week with the Capital One DevExchange team outside of Washington DC, and they’ve provided me with a list of questions for one of our sessions, which they will be recording for internal use. To prepare, I wanted to work through my thoughts, and make sure each of these answers were on the tip of my tongue–here is one of those questions, along with my thoughts.

The number one API that gets me out of bed each day, with an opportunity to apply what I’ve learned in the API sector, is the Human Services Data API (HSDA). It is an open API standard I am the technical lead for, which helps municipalities, and human service organizations better share information that helps people find services in their communities. This begins with the basics like food, housing, and healthcare, but in recent months I’m seeing the standard get applied in disaster scenarios like Hurricane Irma to help organize shelter information. This is why I do APIs. The project is always struggling for funding, and is something I do mostly for free, with small paychecks whenever we receive grants, or find projects where we can deliver an actual API on the ground.

Next, I’d say it is government APIs at the municipal, state, but mostly at the federal levels. I was a Presidential Innovation Fellow in the Obama administration, helping federal agencies publish their open data assets, and take inventory of their web services. I don’t work for the government anymore, but it doesn’t mean the work has stopped. I’m regularly working on projects to ensure RFPs, and RFIs have appropriate API language in them, and talking with agencies about their API strategy, helping educate them about what is going on in the private sector, and often times even across other government agencies. APIs like the new FOIA API, Recreational Information Database API, Regulations.gov, IRS APIs, and others will have the biggest impact on our economy and our lives in my opinion, so I make sure to invest a considerable amount of time here whenever I can.

After that, working with API education and awareness at higher educational institutions is one of my passions and interests. My partner in crime Audrey Watters has a site called Hack Education, where she covers how technology is applied in education, so I find my work often overlapping with her efforts. A portion of these conversations involve APIs at the institutional level, and working with campus IT, but mostly it is about how the Facebook, Twitter, WordPress, Dropbox, Google, and other public APIs can be used in the classroom. My partner and I are dedicated to understanding the privacy implications of technology, and how APIs can be leveraged to give students and faculty more control over their data and content. We work regularly to tell stories, give talks, and conduct workshops that help folks understand what is possible at the intersection of APIs and education.

After that, I’d say the mainstream API sector keeps me interested. I’m not that interested in the whole startup game, but I do find a significant amount of inspiration from studying the API pioneers like SalesForce and Amazon, and social platforms like Twitter and Facebook. As well as the cool kids like Twilio, Stripe, and Slack. I enjoy learning from these API leaders, studying their approaches, but where I find the most value is sharing these stories with folks in SMB, SME, and the enterprise. These are the real-world stories I thrive on, and enjoy retelling as part of my work on API Evangelist. I’m a technologist, so the technology of doing APIs can be compelling, and the business of doing this has some interesting aspects, but it’s mostly the politics of doing APIs that intrigues me. This primarily involves the politics of the humans involved within a company, or industry, providing what I always find to be the biggest challenges of doing APIs.

In all of these areas, what actually gets me up each day is being able to tell stories. I’ve written about 3,000 blog posts on API Evangelist in seven years. I work to publish 3-5 posts each weekday, with some gaps in there due to life getting in the way. I enjoy writing about what I’m learning each day, showcasing the healthy practices I find in my research, and calling out the unhealthy practices I regularly come across. This is one of the reasons I find it so hard to take a regular job in the space, as most companies are looking to impose restrictions, or editorial control over my storytelling. This is something that would lead to me not really wanting to get up each day, and is the number one reason I don’t work in government, resulting in me pushing to make change from the outside-in. Storytelling is the most important tool in my toolbox, and it should be in every API provider's toolbox as well.


OpenAPI 3.0 Tooling Discovery On Github And Social Media

I’ve been setting aside time to browse through and explore tagged projects on Github each week, learning about what is new and trending out there on the Githubz. It is a great way to explore what is being built, and what is getting traction with users. You have to wade through a lot of useless stuff, but when I come across the gems it is always worth it. I’ve been providing guidance to all my customers that they should be publishing their projects to Github, as well as tagging them coherently, so that they come up as part of tagged searches via the Github website, and the API (I do a lot of discovery via the API).

When I am browsing API projects on Github I usually have a couple of orgs and users I tend to peek in on, and my friend Mike Ralphson (@PermittedSoc) is always one. Except, I usually don’t have to remember to peek in on Mike’s work, because he is really good at tagging his work, and building interesting projects, so his stuff is usually coming up as I’m browsing tags. His is the first repository I’ve come across that is organizing OpenAPI 3.0 tooling, and on his project he has some great advice for project owners: “Why not make your project discoverable by using the topic openapi3 on GitHub and using the hashtag #openapi3 on social media?” « Great advice Mike!!

As I said, I regularly monitor Github tags, and I also monitor a variety of hashtags on Twitter for API chatter. If you aren’t tagging your projects, and Tweeting them out with appropriate hashtags, the likelihood they are going to be found decreases pretty significantly. This is how Mike will find your OpenAPI 3.0 tooling for inclusion in his catalog, and it is how I will find your project for inclusion in stories via API Evangelist. It’s a pretty basic thing, but it is one that I know many of you are overlooking because you are down in the weeds working on your project, and even when you come up for air, you probably aren’t always thinking about self-promotion (you’re not a narcissist like me, or are you?)
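If you want a sense of how this kind of discovery works in practice, here is a minimal sketch using GitHub's public repository search API to pull projects tagged with the openapi3 topic–the endpoint and topic qualifier are GitHub's, but the surrounding script is just my own illustration, not part of anyone's catalog tooling.

```python
import requests  # pip install requests

# Search public repositories tagged with the openapi3 topic via the GitHub search API.
# Unauthenticated requests are rate limited, so heavier discovery work needs a token.
response = requests.get(
    "https://api.github.com/search/repositories",
    params={"q": "topic:openapi3", "sort": "updated"},
    headers={"Accept": "application/vnd.github+json"},
)
response.raise_for_status()

# Print each tagged repository with its description, most recently updated first.
for repo in response.json()["items"]:
    print(repo["full_name"], "-", repo.get("description") or "no description")
```

The same search is available through the Github website, which is how most folks will stumble across your project if you tag it.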

Twitter #hashtags have long been a discovery mechanism on social media, but tagging on Github is quickly picking up steam when it comes to coding project discovery. Also, with the myriad of ways in which Github repos are being used beyond code, Github tagging makes the platform a discovery tool in general. When you consider how API providers are publishing their API portals, documentation, SDKs, definitions, schema, guides, and much more, it makes Github one of the most important API discovery tools out there, moving well beyond what ProgrammableWeb or Google brings to the table. I’ll continue to turn up the volume on what is possible with Github, as it is no secret that I’m a fan. Everything I do runs on Github, from my website, to my APIs, and supporting tooling–making it a pretty critical part of what I do in the API sector.


I Wish I Had Time To Tell That API Story

If you have followed my work in the API space you know that I consider myself an API storyteller before I would ever call myself an API evangelist, architect, or any of the other roles I play. Telling stories about what folks are up to in the space is the most important thing to me, and I feel it is the most common thing people stumble across, and end up associating with my brand. You hear me talk regularly about how important stories are, and how all of this API thing is only a thing because of stories. Really, telling stories is the most important thing you should be doing if you are an API provider or API service provider, and something you need to be prioritizing.

I was talking with a friend, and client, the other day about their API operations, and after they told me a great story about the impact their APIs were making I said, “you should tell that story”! To which they responded, “I wish I had time to tell that story, but I don’t. My boss doesn’t prioritize me spending time on telling stories about what we are doing.” ;-( It just broke my heart. I get really, really busy during the week with phone calls, social media, and other project related activity. However, I will always stop what I’m doing and write 3-5 blog posts for API Evangelist about what I’m doing, and what I’m seeing. I know many of the stories are mundane and probably pretty boring, but they are exercise for me, of my ideas, my words, and how I communicate with other people.

The way that enterprise groups and startups operate is something I’m very familiar with. I’ve been scolded by many bosses, and told not to read or write on my blog. This is one of the reasons I don’t work in government anymore, or in the enterprise, as it would KILL ME to not be able to tell stories. I need storytelling to do what I do. To work through ideas. It is how I learn from others. Why would I want to do something that I can’t tell others about? Why would I not prioritize telling stories about the cool things my clients are doing with my APIs? Sure, there are some classified, and sensitive situations where you definitely would not, but most of the reasons I hear for not telling stories publicly about the cool things you are doing are complete bullshit. I’m sorry, but they are. Even if you have to package it as a white paper or case study, you should be putting this down for others to learn from.

When you find yourself telling your creative side (or me) that you wish you had time to tell that story, you should consider that a canary in the coal mine. A sign that there are other illnesses going on. Sure, once or twice is fine, but if this becomes a sustained thing, or worse–you stop wanting to tell stories at all, then you should be looking for a new gig. You just had your mojo killed. Nobody deserves that. No employer should kill their employees' storytelling mojo. Even if you are all business, telling stories is essential to making things work. Press releases are stories. Ok, they are usually a very sad, pathetic story, but they are a story. Your company blog should be active. Your personal blog should be active. Go check out your personal blog, when was the last time you wrote something you were passionate about? If it was more than a year ago, your employer has put you in a box, and is looking to keep you there.

Photo Credit: The Boyhood of Raleigh by Sir John Everett Millais, oil on canvas, 1870. A seafarer tells the young Sir Walter Raleigh and his brother the story of what happened out at sea, from the Wikipedia entry for storytelling.


The Fact That You Do Not Know Who I Am Shows You Live In A Silo

Don’t you know who I am? I am the API Evangelist. Ok, I know this post is dripping with ego. However, it is the last post in my week of API rants, and I’m just pumped from writing all of these. These types of posts are so easy to write because I don’t have to do any research, or real work, I just write, putting my mad skills at whitesplaining and mansplaining to work–tapping into my privilege. So I’m going to end the week with a bang, and fully channel the ego that has developed along with the persona that is API Evangelist.

However, there is a touch of truth to this. If you are operating an API today, and you do not know who I am, I’m just going to put it out there–you live in a silo. I have published around 3,000 blog posts since 2010 on APIs. I’m publishing 3-5 posts a day, and have consistently done so for seven years. There are definitely some major gaps in that, but my SEO placement is pretty damn good. You type API or APIs, and I’m usually in the top 30, occasionally popping up on the first page. The number one thing I hear from folks who message me is that they can’t search for anything API related without coming across one of my posts, so they want to talk to me. So why is it that you do not know who I am? I have some ideas on that.

It is because you do not read much outside your silo. When you do, you don’t give any credit to authorship. So when you have read any of my posts you didn’t associate them with a person named Kin Lane. You operate within your silo 98% of the time, and the 2% you get out, you really don’t read much, or learn from others. I on the other hand spend 98% of my time studying what others are doing, and 2% hiding away. My goal is to share this with you. I’d say 75% of my work is just referencing and building on the work of others, only 25% is of my own creation. I’m putting all of this out there for you, and you don’t even know it exists. What does that tell you about your information diet? It tells me that you probably aren’t getting enough exercise and nutrients as part of your regular daily intake, which will make your API operations less healthy and strong–reading is good for healthy bones, girls and boys!

Kin Lane doesn’t have this much ego, but API Evangelist does. The fact that you don’t know who I am shows you aren’t spending enough time studying the API space before launching an API. I’m hoping that in your API journey you learn more about the importance of coming out of your bubble, and learning from your community, and the wider API community. It is why we do APIs, and why APIs work (when they do). I wrote this title to be provocative and part of my week of rants, but honestly it is true. If you haven’t come across one of my API posts, and stumbled onto my blog at some point, you should probably think about why this is. The most successful API providers and evangelists I know are tuned into their communities, industries, and wider API space, and are familiar with my work–even if they don’t all like me. ;-)

Note: If my writing is a little dark this week, here is a little explainer–don’t worry, things will back to normal at API Evangelist soon.


The Importance Of API Stories

I am an API storyteller before I am an API architect, designer, or evangelist. My number one job is to tell stories about the API space. I make sure there are (almost) always 3-5 stories a day published to API Evangelist about what I’m seeing as I conduct my research on the sector, and thoughts I’m having while consulting and working on API projects. I’ve been telling stories like this for seven years, which has proven to me how much stories matter in the world of technology, and the worlds that it is impacting–which is pretty much everything right now.

Occasionally I get folks who like to criticize what I do, making sure I know that stories don’t matter. That nobody in the enterprise or startups cares about stories. Results are what matter. Ohhhhh reeeaaaly. ;-) I hate to tell you, it is all stories. VC investment in startups is all about the story. The markets all operate on stories. Twitter. Facebook. LinkedIn. Medium. TechCrunch. It is all stories. The stories we tell ourselves. The stories we tell each other. The stories we believe. The stories we refuse to believe. It is all stories. Stories are important to everything.

The mythical story about Jeff Bezos’s mandate that all employees needed to use APIs internally is still 2-3% of my monthly traffic, down from 5-8% over the last couple of years, and it was written in 2012 (five years ago). I’ve seen this story on the home page of the GSA internal portal, and framed hanging on the wall in a bank in Amsterdam. Stories are important. Stories are still important when they aren’t true, or partially true, like the Amazon mythical tale is(n’t). Stories are how we make sense of all this abstract stuff, and turn it into relatable concepts that we can use within the constructs of our own worlds. Stories are how the cloud became a thing. Stories are why microservices and devops are becoming a thing. Stories are how GraphQL wants to be a thing.

For me, most importantly, telling stories is how I make sense of the world. If I can’t communicate something to you here on API Evangelist, it isn’t filed away in my mental filing cabinet. Telling stories is how I have made sense of the API space. If I can’t articulate a coherent story around API related technology, and it just doesn’t make sense to me, it probably won’t stick around in my storytelling, research, and consulting strategy. Stories are everything to me. If they aren’t to you, it’s probably because you are more on the receiving end of stories, and not really influencing those around you in your community, and workplace. Stories are important. Whether you want to admit it or not.


If you think there is a link I should have listed here feel free to tweet it at me, or submit as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.