
8% of pull requests are doomed

Today we’ll look at three terminal pull request outcomes and one way to increase velocity in
your engineering process.

Every pull request has one of three outcomes

Every pull request has costs: engineering labor, product management, and
opportunity cost, to name a few. Each also has an outcome: merged, closed
without merging, or abandoned due to inactivity.

Here’s a look at how pull requests fare across the industry:

If you group closed and inactive pull requests together (“Abandoned PRs”), you
can estimate that the average engineer abandons 8% of the pull requests they
create, which is equivalent to a loss of $24,000 per year1, or the cost of a
2018 Toyota Camry Hybrid.

(We consider pull requests that have had zero activity for more than three days
to be abandoned because our data shows a very low likelihood that PRs that go
untouched for so long get merged later.)

Achieving zero abandoned pull requests is an anti-goal, as it would require being
extremely conservative when opening them. However, a high rate of abandoned PRs can
indicate inefficiency and opportunity for improvement within an engineering
process. Reducing PR loss by 20% on a team with 10 engineers could save $48,000
per year.
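To make the arithmetic concrete, here’s a minimal sketch in Python of the definitions above: a PR counts as abandoned if it was closed without merging or has been inactive for more than three days, and the annual cost is the loss rate times the fully loaded cost per developer. The data shape is illustrative, not Code Climate’s actual schema.

```python
from datetime import datetime, timedelta

ABANDON_AFTER = timedelta(days=3)   # inactivity threshold from the post
COST_PER_DEV = 300_000              # footnote: fully loaded annual cost

def loss_rate(prs, now):
    """Fraction of PRs closed without merging or inactive > 3 days."""
    abandoned = sum(
        1 for state, last_activity in prs
        if state == "closed"  # closed without merging
        or (state == "open" and now - last_activity > ABANDON_AFTER)
    )
    return abandoned / len(prs)

# Hypothetical history for one engineer: 25 PRs, 2 abandoned.
now = datetime(2018, 7, 1)
prs = (
    [("merged", now)] * 23
    + [("closed", now)]                     # closed without merging
    + [("open", now - timedelta(days=10))]  # stale: counts as abandoned
)

rate = loss_rate(prs, now)          # 2 of 25 -> 0.08
annual_loss = rate * COST_PER_DEV   # ~$24,000 per engineer
```

At the post’s 8% average, that works out to $24,000 per engineer per year, matching the footnote.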

How does my team stack up?

Using an anonymized, aggregated analysis of thousands of engineering
contributors, we’re able to get an understanding of how an engineering
organization compares to others in the industry:

This density plot shows that the average pull request loss rate across our
dataset is 8% (with a median of 6%). A loss rate above 11% would be in the
bottom quartile, and a loss rate below 3% would be upper quartile performance.

Improving pull request outcomes

Abandoned pull requests are, of course, a lagging indicator. You can tell because it
would be ridiculous to go to an engineering team and say, “All those PRs that
you’re closing… merge them instead!”

Potential drivers lie upstream: late-changing product requirements, shifting
business priorities, unclear architectural direction, and good old-fashioned
technical debt. If you have an issue with abandoned pull requests, soliciting
qualitative feedback is a great next step. Talk to your team. Identify something
that is impacting them and talk about how you might avoid it next time. Then,
rather than focus on the absolute value of your starting point, you can monitor
that your abandonment rate is going down over time.

After all, you’d probably rather not send a brand new Camry to the scrap yard
every year.

1 Assumes a fully loaded annual cost of $300k per developer.

Read more at the source

Velocity is out of beta

Our mission at Code Climate is to help engineering organizations improve their processes, teams and code. We see a future where everyone from individual developers up to the CTO has access to a full picture of their engineering work in the form of clear, timely and actionable quantitative data.

In February, we opened our Velocity public beta. Over the past five months, we’ve spoken with hundreds of engineering leaders, processed a nearly-overwhelming amount of product feedback, and added dozens of top-requested features.

We’ve been floored by the excitement from engineering leaders:

“If you haven’t tried @codeclimate’s new Velocity product, and you’re interested in non-vanity measurements of productivity, and a baseline from which to measure process improvements, try it now. It’s very exciting.”

– Avi Flombaum, Dean and Chief Product Officer, Flatiron School

“Velocity is quickly becoming one of my favorite tools for engineering management.”

– Tomas Becklin, VP of Engineering, DroneBase

Today, Velocity is launching out of beta, and we’re ready to help your engineering organization turn on the lights.

Click here to book a Velocity demo today.

Everyone who books a demo before Thursday, July 26th will receive our introductory launch pricing of 20% off for life. This is a one-time offer that we won’t be repeating anytime soon.

Still on the fence? Keep reading.

Most engineering decisions are anecdote-driven

Today, engineering organizations are often forced to make decisions based solely on anecdotes, gut feel and incomplete information. We understand that qualitative information is highly valuable – there’s no substitute for experience and intuition. However, the lack of quantitative data within engineering processes is a missed opportunity, especially given how data has transformed DevOps.

Historically, engineering organizations looking to incorporate data into their processes have faced two problems.

First, unless they’re working within a behemoth like Google, there simply aren’t enough developer resources to spare for such efforts. It’s the classic problem of the cobbler’s children having no shoes: analytics has transformed departments like sales, marketing, and finance, yet engineering itself often goes unmeasured.

Second, even if metrics were available, they would be hard to interpret. After all, if someone told you that your team averages 1.9 review cycles per pull request, is that the best you could reasonably aim for, or an opportunity for improvement?

Get data-driven with Velocity

Velocity helps you unlock the full potential of your engineering organization with data-driven insights to manage risks, eliminate bottlenecks, and drive continuous improvement.

It’s built on one simple notion: The happiest developers work on the most productive teams, and vice versa. Applying these practices, which we call Data-Driven Engineering, puts you in the position to achieve both.

Velocity gives you:

  • Custom dashboards and trends – engineering metrics with full historical trends
  • Team insights – actionable data to level up your engineering teams
  • Industry benchmarks – high impact opportunities for improvement by comparing your metrics against other engineering teams
  • Real-time risk alerts – identify and resolve risks before they become problems

As a software company ourselves, we’re committed to improving the process of engineering, for everyone involved: developers, product managers, executives, and more. Velocity is a core part of our foundation to pursue this goal. If you’re excited about this prospect as well, check out Velocity today:

Click here to book a Velocity demo today. Act by July 26th and get 20% off for life.

It takes 10 minutes to set up by connecting to your GitHub (or GitHub Enterprise) account, and soon you’ll have dozens of reports (with full historical data) to easily identify risks and opportunities.

Onward, to a data-driven future!

-Bryan, Noah and the entire Code Climate team

Read more at the source

Turning on the lights

Welcome to the first installment of Code Climate’s new “Data-Driven
Engineering” series. Since 2011, we’ve been helping thousands of engineering
organizations unlock their full potential. Recently, we’ve been distilling that
work into one unified theme: Data-Driven Engineering.

What’s Data-Driven Engineering?

Data-Driven Engineering applies quantitative data to improve processes, teams,
and code. Importantly, Data-Driven Engineering is not:

  • Ignoring qualitative data you don’t agree with
  • Replacing collaboration and conversations
  • Stack ranking or micromanaging developers

Why is this important?

Data-Driven Engineering offers significant advantages compared to
narrative-driven approaches. It allows you to get a full picture of your
engineering process, receive actionable feedback in real-time, and identify
opportunities for improvement through benchmarking. Most importantly,
quantitative data helps illuminate cognitive biases, of which there are many.

What can Data-Driven Engineering tell us?

After analyzing our anonymized, aggregated data set including thousands of
engineering organizations, the short answer is: a lot.

Over the coming weeks, we’ll explore unique and practical insights to help you
transform your organization. We’ll share industry benchmarks for critical
engineering velocity drivers to help our readers identify process improvement
opportunities. Here’s an example:

Pull requests merged per week (PR throughput) per contributor1

This plot shows that an average engineer merges 3.6 pull requests per week, and
a throughput above 5.2 PRs merged per week is in the upper quartile of our
industry benchmark.

You might be thinking, “Why do some engineers merge almost 50% more than their
peers?”… and that’s exactly the type of question Data-Driven Engineering can
help answer.

1 We included contributors who average 3+ coding days per week from
commit timestamps.
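As a sketch of the footnote’s filter (illustrative Python; the actual methodology isn’t published), a “coding day” can be counted as any calendar day with at least one commit, averaged over the measurement window:

```python
from datetime import date

def coding_days_per_week(commit_dates, weeks):
    """Distinct calendar days with at least one commit, averaged per week."""
    return len(set(commit_dates)) / weeks

# Illustrative commit dates for one contributor over a two-week window.
commits = [
    date(2018, 6, 4), date(2018, 6, 4),   # two commits on the same day
    date(2018, 6, 5), date(2018, 6, 7),
    date(2018, 6, 11), date(2018, 6, 12), date(2018, 6, 14),
]

avg = coding_days_per_week(commits, weeks=2)  # 6 distinct days / 2 weeks
include_in_benchmark = avg >= 3               # footnote's 3+ day cutoff
```

Deduplicating by calendar day keeps a burst of commits on one afternoon from inflating the activity measure.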

Read more at the source

Launching Today: Velocity

Data-driven insights to boost your engineering capacity

Today we’re sharing something big: Velocity by Code Climate, our first new product since 2011, is launching in open beta.

Velocity helps organizations increase their engineering capacity by identifying bottlenecks, improving day-to-day developer experience, and coaching teams with data-driven insights, not just anecdotes.

Velocity helps you answer questions like:

  • Which pull requests are high risk and why? (Find out right away, not days later.)
  • How do my team’s KPIs compare to industry averages? Where’s our biggest opportunity to improve?
  • Are our engineering process changes making a difference? (Looking at both quantity and quality of output.)
  • Where do our developers get held up? Do they spend more time waiting on code review or CI results?

Learn more about Velocity

Why launch a new product?

Velocity goes hand-in-hand with our code quality product to help us deliver on our ultimate mission: Superpowers for Engineering Teams. One of our early users noted:

“With Velocity, I’m able to take engineering conversations that previously hinged on gut feel and enrich them with concrete and quantifiable evidence. Now, when decisions are made, we can track their impact on the team based on agreed upon metrics.” – Andrew Fader, VP Engineering, Publicis

Get started today

We’d love to help you level up your engineering organization. Request a free trial and we’ll be in touch right away. As a special thank you for our early supporters, anyone who begins a free, 14-day trial before Friday, February 16th will get 20% off their first year.

Read more at the source

How Codecademy achieves rapid growth and maintainable code

We sat down with Jake Hiller, Head of Engineering at Codecademy, to find out how they use Code Climate to maintain their quality standards while rapidly growing their engineering team.


Manhattan, NY
Ruby, JavaScript, SCSS
Since May 2013

Code Climate keeps our process for creating PRs really low-effort so we can quickly test ideas and ship sooner.

Why Code Climate

Like many rapidly developing teams, Codecademy was running into growing pains for both engineering onboarding and code review. They had tried using local analysis tools but found them cumbersome to integrate as development environments varied across the team.

With an engineering workflow centered around pull request reviews, and a desire to reduce friction in committing and testing code, they needed a solution that would optimize their pull request review process and enable new team members to quickly become productive.

Codecademy had been using Code Climate for their Ruby stack since 2013. When Head of Engineering Jake Hiller joined in early 2015, he saw an opportunity to alleviate their code review and onboarding issues by rolling it out to the whole team.

“We wanted to avoid anything that blocks engineers from committing and testing code. Other solutions that use pre-commit hooks are invasive to both experimentation and the creative process. Code Climate’s flexibility helps us maintain rules that are tailored to our team and codebase, while offering standard maintainability measurements. Plus it enables us to defer checks until code is ready to be reviewed, so we can quickly test ideas and ship sooner.”

“Code Climate helps us transfer knowledge to new engineers – like our coding standards, why we’ve made decisions over time, and why we’ve chosen certain structures and patterns.

Increased speed and quality

Since rolling out to the whole team, Hiller says Codecademy has seen an improvement in the quality of their code reviews and the ease with which new team members get up to speed.

“Code Climate helps us transfer knowledge to new engineers – like our coding standards, why we’ve made decisions over time, and why we’ve chosen certain structures and patterns. New engineers can look through the Code Climate issues in their PR, ask questions, and propose changes and suggestions to the team.

“It’s also increased the speed and quality of our pull request reviews. We’ve been able to spend more time discussing the important functional aspects of our code, and less time debating smaller issues. There are a lot of issues that can’t be fixed with an auto formatter, which is where Code Climate will always be really helpful for our team.”

About Codecademy

Codecademy was founded in 2011 as an immersive online platform for learning to code in a fun, interactive, and accessible way. They’ve helped 45 million people learn how to code, covering a wide variety of programming languages, frameworks, and larger topics like Data Analysis and Web Development. Their recently released Pro and Pro Intensive products provide users with more hands-on support and practice material to help them learn the skills they need to find jobs.

Read more at the source

The Customer Gap

In July, we hosted the first annual Code Climate Summit, a one-day
conference for leaders of engineering organizations who want to better
themselves, their processes, and their teams.

Today we’re sharing The Customer Gap, presented by Code Climate’s Director
of Engineering, Gordon Diggs, and Code Climate’s Customer Support
Lead, Abby Armada. In this talk, you will hear about how we’ve fostered
a positive, productive relationship between customer support and engineering. By
creating processes and encouraging cross-team collaboration, we can combine our
strengths to best focus on customers of current and future products.


Transcript of talk

Abby Armada: Before we get into this, I wanted to highlight this customer quote, and following exchange between our customer support and engineering teams. So: “Awesome dedication by the engineering and customer support team. Thanks, Jenna, Abby, and Ashley.” And there’s Jenna, who’s on customer support, saying, “You da best.” And Ashley saying, “No, you da best.” So, you can see this is the kind of vibe your team can have by closing the customer gap.

I’m Abby. I’m the customer support lead here at Code Climate. And, including me, our awesome team is three people, one being remote. I’ve been at Code Climate for over a year and in various customer-facing roles for about 13 years. I really like running and I really like eating tacos, but not at the same time.

Gordon Diggs: Hi, I’m Gordon. I’m a record collector and I like to cook lasagna. I do sometimes do those two things at the same time. I lead Code Climate’s engineering team and I’m responsible for growing the team in both size, in terms of hiring and, for lack of a better word, “mentality” in terms of our processes and the kinds of ways that we work on things.

Engineering at Code Climate takes up about 40% of the entire company. So, we’re a sizeable department. And Abby and I have actually been discussing and collaborating on the support and engineering relationship for over four years. So we’re really excited to sort of have this culmination of a lot of the stuff that we’ve talked about for so long and share some thoughts with you.

Many companies list customer focus as a core value, but few engineers spend time talking to their customers.

When you think of customer focused companies, you may think of companies like Amazon, where everyone trains in the support center and answers calls, or Apple, where the CEO regularly works on the support team. Many companies list customer focus as a core value, but few engineers spend time actually talking to their customers. So you have the big companies like Amazon and Apple, but many smaller companies that we found also list customer focus as a core value. A lot of people want to do this. They want to be a customer focused company, but it’s really hard and it’s time consuming.

The customer gap: The delta between a stated customer focus and the reality of how engineering spends its time.

And so we coined this term “the customer gap.” We define the customer gap as the delta between the stated customer focus and the reality of how engineering spends its time. Engineering tends to be very separated from our customers. But there are also gaps between your strictly customer-facing departments and your engineering team. If you can close the gap between engineering and those other departments, you can also work to close the customer gap.

Over the years we’ve talked extensively about the gap between customer support and engineering and we think that in order to close that gap, there are things engineering should do, things customer support can do, and things that we should do together.

As a little bit of a visual aid, I made this kind of cheesy chart. But you can sort of see the idea here. You have your customers on the left. You have a small gap between them and your strictly customer-facing departments. And then a wider gap between those departments and your engineering team. And so, one of the things that this shows is that your customer-facing departments are doing a better job of closing the customer gap than you are already, and that’s why they’re closer to the users. And then there’s this larger space between them and the engineering. So, if you can work on closing this gap on the right, then you can really work on closing the greater gap.

Abby Armada: Here’s a brief outline of what we’re going to cover in this talk. We’ll start by talking about ways to build customer empathy within your engineering team. And next, we’ll talk about some examples of processes customer support can implement to work more closely with engineering. And lastly, we’ll talk about closing gaps between engineering and other departments.

Building customer empathy within engineering

Gordon Diggs: Building customer empathy within engineering. This is a really important part of the formula because if your engineers don’t understand your users or their experience, or aren’t bought into this idea of customer empathy and focus, it’ll be a lot harder to close the gap. Ultimately, you really need to get them bought in.

And if I’m being honest, we at Code Climate have it kind of easy. Our customers are software engineers, so it’s not a big gap for us, as engineers, to understand our users and their motivations. If we were some kind of printed product, a fashion company, or some kind of feminine care startup, we couldn’t be sure that 100% of our team has something in common with our users and understands them.

So, we can go to conferences, and we can go to meetups, and we can meet people who use our product or who want to use it or who we think should use it because it would really help their workflows. So, this is just a little bit easier for us. But it is also a little bit of a trap. Software engineers are a very diverse group of people and we need to make sure that we don’t project our own biases and assumptions when thinking about our users.

For a few examples: We at Code Climate are a team of Ruby engineers. We work at a small company. We have a good continuous deployment pipeline, and we generally have good coding practices given the nature of the work that we do. But people who buy and use Code Climate come from wildly different perspectives. A lot of them work on very, very large teams, and with tools that we don’t use and understand as well as our own. So while we can get pretty far working with our idea of software engineering, we need to be aware of where that ends and where our customers’ experience starts.

So, it’s really important that customer focus and empathy is a cultural driver. And you need to build that empathy into your engineering culture. It’s not enough to only think about focusing on your customers when things are broken or when your site is down. And so, what does that look like? What does it look like to build empathy into your engineering culture?

First, make your engineers talk to your customers on a regular basis. Include them in customer site visits. Put them on your sales calls. Make them work with support on debugging customer issues. And send them to conferences or meetups or if there’s some kind of trade show that’s relevant to your company, send them to that and put them at the table that you’re sponsoring.

Secondly, we put every mention of Code Climate on Twitter into a Slack channel. So, good and bad, engineers can see what people are saying about Code Climate online. If we ship something and it’s a poor user experience, or the CTA was a really weird color or something and people start complaining about that, we’re going to see it. And similarly, when an engineer ships something that really resonates with a group of users and they start talking about it online, there’s a nice boost there. So, you can kind of get both sides of the coin there.

We also put engineers on the frontline of responding to our community in both our community slack group and on our open source repositories. If you open an issue or a pull request on one of our repos, it doesn’t go to a member of our support team. It goes directly to an engineer who’s responsible for triaging that issue and finding the solution.

Our engineers also run our status page. If our site goes down or our service is in any way degraded, they’re the ones who are responsible for keeping our users up to date about what’s going on. Obviously, they collaborate with marketing and customer support on these updates to make sure that they are of the highest quality that they can be, but ultimately, the timing and the content is up to them.

And lastly, watch your users use the product. One issue that we have as engineers, particularly product engineers is that we know the code that goes into the site and we know that if a page is slow or a user experience is weird, it’s often because the code behind it is complicated. And we bring this bias into using our own product that, “Well, this page is slow because there’s all this nested logic in the templates and stuff.” But your users don’t understand that. They don’t have any of that. And watching them interact with your product, watching the way that they use it, and where they get frustrated, will really help eradicate some of that handwaviness.

So, as with most cultural efforts, building empathy is harder on bigger teams.

Panna actually touched on this a little bit earlier. So, if you can start when your team is small, that’s better. The bigger your team is, the more people you need to convince to buy in to this idea of customer focus and empathy. So, if you’re at a small company, that’s great. You’re in a really prime position to effect change across your team. If you’re at a larger company, all hope is not lost. You can start with a small subset of your team, find a pilot team, and have engineering managers of that team spread their success laterally throughout the organization.

And it’s really important, I really want to mention that it’s not enough to silo this empathy. You can do all you want to build customer focus within your engineering team, but it’s also important to work closely with your support team. And Abby’s going to talk a little bit about some of the ways to do that.

Helping each other

Abby Armada: So, you have to work together. There are a couple of solutions we created here at Code Climate to help both the engineering and support teams be successful at this. These solutions may seem obvious in some ways but the execution and upkeep is essential for it to work well and close the gap, which in turn closes the other gap that exists between engineering and your customers.

The first of these solutions on our side was addressing escalations, which is when a customer’s problem has gone beyond the scope of the support team’s troubleshooting and knowledge and requires a solution from an engineer or someone else. I’m sure many of you here have had to deal with customer escalations in one way or another, and it might have been a thorn in your side. We did a lot of work to improve this process for both of our teams, which ultimately has made our customers way happier.

Our main channel of support is email, then it trickles through to Twitter, Slack, and sometimes GitHub Issues. This is where all of our escalations come from.

So, how did we address escalations before? Engineers worked on escalations on a weekly rotation. This wasn’t ideal for anyone. For the incoming engineer, there wasn’t a lot of context for existing open issues. Plus, it seemed like a giant chore that was an interruption to their regular work. On support’s side, it’s hard to ramp up someone for this work, and it’s especially difficult to form a good working relationship with your new engineer for only a one-week rotation.

We didn’t have a great way of tracking this work, or a consistent process for handing off open issues to the new engineer, or giving enough context for the problems. Prioritizing issues did not exist at all, and we kind of thought they would just figure it out. This led to everyone drowning in a sea of confusion and sadness.

To fix this we came up with a couple of solutions. Instead of a weekly rotation, we now have a dedicated support engineer for a quarter. This solved that problem of feeling like they’re interrupting other work and gives the engineer a chance to fully work and focus on escalations. It also builds a great rapport between engineering and support because we get to know each other for a quarter.

And we broke down our escalations by severity, which gives us an actual prioritization system that makes sense to both teams. There are now four levels of severity, one being the highest and four being the lowest. This isn’t a new concept by any means, but implementing something concrete was the most important part of the solution.

We also documented how to respond to each severity, both as a support person and as an engineer, and holistically as an organization. This is written in our company handbook so everyone can see it and share the knowledge. We also have concrete examples in the doc for easy reference. So, if someone is confused, they can look at the doc examples and know how to assign an issue a severity.

And lastly, we started using a GitHub repo and issues to track and update customer escalations. Everyone in the company already knows how to use GitHub, so implementing this part was the easiest.

This is an example of our severity documentation. You don’t need to read the whole thing – and the people in the back probably can’t anyway – but you can see the structure of how we define severities. In this case, this is a severity three, normal/minor impact. It’s something like a moderate loss of application functionality that doesn’t really affect any of their other workflows. It matches the description. And then the examples live underneath. In this case, a customer reports a bug in an engine that’s kind of broken but doesn’t affect anything else that they’re doing. And then the response plan underneath details what both the customer support person and the engineers can do to solve this issue.

Another change is that severity ones are treated differently than other severities within our organization. A severity one is a major issue. So, for example, if a customer’s instance of Code Climate Enterprise is down, we flag it as severity one and it requires all hands on deck to fix.

In past quarters, we found that the burden of solving these types of escalations is hard for a single escalations engineer. They often had to ask for extra help anyway. So we changed severity ones to be treated as production incidents. Support escalates directly to our engineering team’s existing PagerDuty rotation. We have a fairly robust alerting pipeline for codeclimate.com, so most issues are caught by other alerts before they even get to support. It was pretty easy for us to integrate these other issues into that rotation. Distributing the work amongst those who are already on call helps solve severity ones more quickly and taps into existing expertise.

Adopting the aforementioned quarterly rotation helped highlight gaps in our own team’s knowledge about troubleshooting in other parts of the product. Thus, engineering has started leading proactive education workshops to teach concepts that will help us troubleshoot future issues. For instance, one of our support engineers noticed that we were pretty much escalating every single issue that had to do with our enterprise product. He set up a workshop to talk through troubleshooting concepts and how engineering looks at the same issues. This helped the support team work through more enterprise issues and lessen those types of escalations.

Engineers really benefit from this, too. Having them explain their work and their contributions to the product helps them grow and build empathy, and again, really strengthens that great rapport between your support and engineering teams. And informing the support team empowered us to troubleshoot and triage more effectively and even nipped future escalations in the bud.

Here’s our resolution time for escalations per month. After a slight rise due to those new processes, especially proactive education, we were able to deliver a faster mean time to resolution than before, and our customers, obviously, are really happy about that. And here are our escalations per month. You can see the tangible benefits of when we adopted these new processes, and it’s because of that quarterly rotation, better processes around triaging, and proactive education. On average, our monthly escalations have gone down month to month, and that’s great.

Having worked on our escalation process, the next thing we did was take a look at the product engineering and support relationship. It’s a common problem that support doesn’t have the full picture of what’s being developed. I hear this all the time from other support professionals that I interact with every day, and this is a problem for everyone.

Even on a small team, communication is oversaturated. There are too many Slack posts and GitHub issues, and this is not sustainable at all. There’s just too much to keep up with. Last quarter, we established a weekly customer support and product meeting to sync up about work being done that week, as well as talk about things coming through the pipeline. Instead of trying to keep up with endless notifications, now we talk face-to-face and it’s much easier.

Personally, this is my favorite meeting I have every week because it’s really productive for my team and the product team, and it really gives my team great perspective on what’s happening in our organization. So, we talk about work in progress. What’s the progress on last week? What’s happening this week? We cover any upcoming releases. We get to answer the question of what exactly are we shipping and when? And how does it affect our customers? Then we talk about any customer-facing communications needed like updating docs, release notes, as well as inquiry response to incoming new customer questions. And lastly, we talk about feature requests. I’ll cover that in a little bit more detail in a bit.

This has helped our teams a lot. I share the most important knowledge back to my team; since one of my people is remote, it’s good that we can all be on the same page about what’s happening that week. It also helps us action any internal work that supports that product work. And it has stopped confusion about what exactly is being released. We’re more confident and thus can help our customers better. And lastly, it fosters trust, again, between support and product. We know what’s going to be released and how to handle it.

Now that we have that open channel of communication with product and engineering, we decided to tackle the beast: feature requests. We all want to be the type of company that welcomes customer feedback and feature requests, and actually acts on them. I think this is the hardest gap to close, but we’ve taken a lot of steps here at Code Climate to do so.

Before, support had ways to catalog feature requests, but no way to surface them in a meaningful way for people to action them. So, we had a very bad GitHub repo, then a very bad Trello board, and – hey, Gordon, did you look at any of those ever?

Gordon Diggs: Ehhhhhhh…

Abby Armada: Yeah, me neither. There was a limited process for both the repo and the Trello board, which also housed bugs alongside feedback. Someone would make an issue, which would live on in perpetuity. And every time a request came up again, someone would comment on it and push it to the top. It sounds good in theory, but no one ever looked at them. Just having the tools isn’t enough. We thought that switching to Trello would solve the problem of stale, sad feature requests. But, in fact, we just needed a process.

The more you can get feedback into the eyeballs of your company, the better. You have to be loud to get this actioned.

We created a feedback repo on GitHub, again, which feeds into a Slack channel, then we talk about important feedback during that product meeting I mentioned before. The more you can get feedback into the eyeballs of your company, the better. You have to be loud to get this actioned.

This is what that GitHub repo looks like. You can see the different issues opened by the people on our team. And we use labels to easily identify the status of each feature request. You can see stuff that’s been actioned, what needs to be reviewed, and the different parts of the site that the feature request is about. And now these are pure feature requests. They are not bugs or anything pertaining to escalations. These are all nice to haves and legitimate product feedback from our customers.

Of note, this repo is completely internal. There’s no way for a customer to directly add feedback to this repo. Only people at Code Climate can do this. And then the feedback is curated by me. We have an issues template that asks a bunch of relevant questions. If the feedback doesn’t clearly answer why a customer is asking for a feature or isn’t detailed enough, it gets rejected. This curation process keeps the board fresh and keeps my finger on the pulse of what our customers want.
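The talk doesn’t show the template itself, but a feature-request issue template along these lines could live in the repo’s `.github/ISSUE_TEMPLATE/` directory. The field names here are illustrative, not Code Climate’s actual template:

```markdown
---
name: Feature request
about: Product feedback relayed from a customer conversation
labels: needs-review
---

**Which customer asked for this?**

**What problem are they trying to solve, in their own words?**

**Which part of the product does this touch?**

**Has this come up before? How often?**
```

A template like this forces every request to answer the “why is the customer asking” question up front, which is what makes the curation step possible.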

This is that Slack channel with that piped in feedback activity. It has all new issues and comments on old issues. And they all get fed into this channel that anyone in our company can join. So, in this screenshot, Jenna opened an issue, pinged Noah to get his thoughts, which he then shared to the issue itself. Loud is good.
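The GitHub-to-Slack piping described above can be done with off-the-shelf integrations, but as a minimal sketch of the idea: GitHub’s issue webhook events carry `action`, `issue.title`, `issue.html_url`, and `issue.user.login`, which is enough to build a message for a Slack incoming webhook. This is an illustration, not Code Climate’s actual integration:

```python
# Sketch: turn a GitHub issue webhook event into a Slack
# incoming-webhook payload announcing new feedback.
from typing import Optional

def slack_message_for_issue(event: dict) -> Optional[dict]:
    """Build a Slack payload for a newly opened feedback issue."""
    if event.get("action") != "opened":
        return None  # only announce new issues; comments could be handled similarly
    issue = event["issue"]
    text = (f"New feedback from {issue['user']['login']}: "
            f"<{issue['html_url']}|{issue['title']}>")
    return {"text": text}

# Example event, trimmed to the fields used above (values are made up):
event = {
    "action": "opened",
    "issue": {
        "title": "Allow muting a check per-branch",
        "html_url": "https://github.com/example/feedback/issues/42",
        "user": {"login": "jenna"},
    },
}
print(slack_message_for_issue(event))
# → {'text': 'New feedback from jenna: <https://github.com/example/feedback/issues/42|Allow muting a check per-branch>'}
```

The payload would then be POSTed to the channel’s incoming-webhook URL; anyone in the company who joins the channel sees the feedback as it arrives.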

As I mentioned, we also talk about the feedback issues in that weekly product and customer support meeting and see if they’re still relevant and see where they fit within our product roadmap. Sometimes feedback and feature requests can alter a product roadmap, giving us ideas for something we didn’t even consider before. And through all of this, we’ve seen an immense improvement in adoption of customer feedback within product development. In fact, after we adopted these processes, 25% of feature requests were actioned by our product team, which is a huge improvement from basically zero.

You’ve heard me talk a lot about what we did at Code Climate to close the gaps between engineering and support, but these solutions might not necessarily work for your specific teams or solve the types of problems you’re facing between your customers, support, and engineering team. As an engineering manager, it’s up to you to try and work in tandem with your support team to solve these problems. Collaboration, communication, and iterating on processes together are the key ways to do this and it leads to happier customers overall.

Closing the gap to other departments

Gordon Diggs: We’ve talked about how to build customer empathy and we’ve talked about some of the processes that support and engineering have worked on together. But what about the other customer-facing departments in our organizations? Your customers move through life cycles, and engineering should follow them.

Before your customers are even your customers, they interact with your marketing and with your marketing team’s efforts. At Code Climate, we dedicated an engineer for a whole quarter to help build automation for our marketing lead pipeline. He did this by supplementing leads with data from a variety of sources and piping it all into our CRM. He learned about what makes leads more qualified from a marketing perspective, and our pipeline is fuller and faster than it’s ever been. He also learned a lot about our user personas and our segments. And this ties back to that idea of the diversity of our users from before. He came back from the marketing team with a better understanding of our users and where they come from. Dedicating a full-time engineer to this, rather than hiring a contractor or buying a product off the shelf, ensured that we built it the right way and in a way that we can easily maintain and extend moving forward.

Sales is a really important part of most businesses, and for a SaaS product in particular we need to have a sales team. Salespeople usually aren’t engineers and may not be able to articulate the same engineering concepts, but they are very good at learning, and they can really learn from your engineers. There are a few ways that engineers can get involved with your sales department.

The first and maybe most obvious of these is building features for customers who are in your sales pipeline. Putting engineers on your sales team means that you can more quickly action the high-value projects that will sign customers right away. So, we identified three key features blocking sales, and then we built them. Additionally, just by having the conversation about which features the users we’re talking to really want, we clarified our product direction and now have a clearer idea of where we need to go.

There’s a very, very important note and caveat to this. Some features will be too big for this setup and some will be in conflict with your existing product roadmap. I’m not suggesting that you give your sales people carte blanche to implement features in your product. But, working through the discussion of the features that they want to build and finding the compromises will really help grow your business. It’ll help clarify your roadmap, and it’ll sign customers, which is really good.

So, another sales engineering activity is reviewing leads technically. This is a little bit specific to Code Climate as a highly technical product, but we have an on-premise enterprise product and we want to be sure that we only deploy it to platforms and customers that will be successful. Doing this once a customer is about to sign a contract causes the sales process to lose momentum, but if you review the technical requirements – for us that’s things like virtualization platform, version control system, programming languages, that kind of stuff – if you review that earlier in the process, your leads will be more successful.

Pulling engineers in to do this on an ad hoc basis was something that we used to do like, “Oh, yeah, just go grab someone off of product to review one of these leads.” But it meant more prep, more interruption, and generally lower quality reviews. So having people on your sales team ready to do those reviews is a really big strength.

Similarly, installing and setting up the product with customers gives engineers a really good sense of what your onboarding experience is like. In many cases, it’s been years since one of your engineers signed up for your product and, most likely, they’ve never done that with money on the line. So, having them sit down and understand what the customer is going through helps them build it better in the future and it helps get the customers up to speed faster. This also ties back to that idea that I was talking about earlier of watching your users use the product. This will really help eradicate some of those unconscious biases that your engineers bring to your onboarding experience.

Lastly, putting customers in touch with your engineers directly builds rapport. It’s rare that you can jump on a sales call with a company and there’s an engineer there to tell you about how they built the product, particular features that they really like, that kind of stuff. It’s also rare that you can be in the Slack group and DM an engineer directly to ask them a question about how do I configure this? How do I work with this? And so, having that engineer there really helps build this rapport.

Abby talked extensively about customer support and shared lots of really good thoughts, but I want to throw this department up here because escalations were where we started. Both Abby and I started talking about this customer support engineering thing, and it was the first place that we experimented with this quarterly rotation idea. I also want to mention that being on the support team doesn’t mean just answering escalations and providing technical knowledge to the support team. A successful setup of an engineer on a support team means also building the features that ensure those customer issues don’t happen again.

So, in the same way that sales engineers build features for customers in the sales pipeline, your support engineers should collaborate with product and engineering to action the features that will really help prevent future issues. So, you can see that drop in escalations per month, and a lot of that is because we found the patterns and where users were running into trouble and where they were writing into support, and built better experiences for them.

What about once someone is a customer but their experience isn’t broken? So, they’re not writing into your support team. Customer success works with customers who are configuring the product, who are discovering how to use it, and who are using it in their workflows everyday. Your engineers can help train users on how to use your product. They can go do customer visits. They can sit down and eat lunch with your users and show off their work or talk about things that are coming up that they’re working on.

I really love training customers, and working with them, and seeing them get excited about a feature that someone on my team built, or seeing them understand a complex part of our system. And if your customers are engineers like ours are, that’s even better because they’re going to want to know what it’s like, what the special sauce behind Code Climate is.

So, in both customer success and sales, the engineers are training both your customers and your representatives about the product. Your representatives may not know about the newest features or what’s in development. And if they do, they may not know about all of the edge cases or the best applications for these features.

And ultimately, working with customers and with other departments makes your engineers more well-rounded. They’re still going to be great product engineers. They’re not going to stop wanting to build features. It’s fine. But they’re also going to be better at sales. They’re going to be better at support. They’re going to be better at marketing. And this will set them up for success both in your company and in their careers at large. We’re lucky because our quarterly rotation lends itself well to putting engineers on other teams and working directly with other customers.

Engineers also remember their customer interactions. And it informs the way that they build things in the future. They remember seeing that customers missed a CTA or ran into an error because of a configuration problem. And this memory or this interaction lingers with them the next time they build a feature or talk to another customer. And it really informs the way that they go about their work in the future. So, we’ve seen that engineers who do a tour on the customer support team come back to product and have a new way of looking at building product because they’re thinking about customers and they’re focusing on them more. To restate that, more experience with customers leads to better features.

More experience with customers leads to better features.

Engineers don’t like to be interrupted. We’ve talked a lot about building processes to reduce this interruption. Getting into flow is a really important thing for engineers. All the engineers in the room are nodding their heads. And we, as managers, can’t pretend to change that. We’re not going to change flow. We’re not going to solve that problem. But understanding why your support team is bugging you to fix issues and understanding their motivations and that they’re advocating for your users will really help your engineers grow and will make your product more stable.

And I’ll also mention, we made these slides and we did a rehearsal of this talk. And one of my engineers is like, “I don’t like the word nagging on this slide because I think it’s a negative thing.” And it is. It’s only nagging if there’s a problem and if it’s something that they don’t really want to be doing. Talking to support more often, doing these weekly check-ins, having the people there for them to discuss escalations with, will prevent the unnecessary interruptions and it won’t be nagging. It’ll just be talking to you. Support will be talking to you for a reason.

Abby Armada: Support doesn’t like to be interrupted either. We’re doing work too. So having dedicated time to talk to engineers really helps everyone and your customers.

And here’s that visual we presented before, except we’ve closed the gap significantly. And our engineers are closer to our customers than ever before.

Let’s review some takeaways and what you can do.

Gordon Diggs: Build empathy within your engineering team to foster a focus on your customers. Know where the gaps are between your engineers’ experience and your customers’, and then close them. And start small. If you have a small engineering team, that’s great. Get everyone on board right away. But if your organization is larger, find a pilot team and then recruit your leaders within engineering to spread their success to others.

Abby Armada: Make sure your processes promote good collaboration between support and engineering. Don’t be afraid to experiment and try new things and talk to each other. You’ll learn a lot about the ways that you can work together and make your customers happier.

Gordon Diggs: Include engineers in every stage of your customers’ life cycle. Find a way. Go to those teams in the strictly customer-facing departments and say, “How can engineering help here?” From marketing to sales to support to customer success, find out where the engineers can help and then put them on the teams for a significant amount of time.

Abby Armada: At the end of the day, if you’re truly customer focused, then happy customers will mean a happy business.

Gordon Diggs: Thank you. Obviously, Abby and I have lots of thoughts on this and we’ve talked about this for a long time. If you have thoughts about any of this stuff, please come talk to us. If these sound like particularly interesting problems to you that you want to help us solve, Code Climate is hiring for both engineering and support roles. So, please come talk to us. And yeah, thank you for coming today.

Abby Armada: Thanks!

Gordon Diggs: Thank you for listening to us.

Gordon is the Director of Engineering at Code Climate. He spends his days managing and growing the engineering team. When he is not at work, he can usually be found at the nearest record store or at home cooking lasagna.

Abby is the Customer Support Lead at Code Climate, and is passionate about great customer experiences. She’s currently working on developing her team, scaling support infrastructure, and finding the perfect taco in New York City.


Building a developer culture with InnerSource

In July, we hosted the first annual Code Climate Summit, a one-day
conference for leaders of engineering organizations who want to better
themselves, their processes, and their teams. Starting today, we’re excited to
start sharing videos and transcripts of the six great talks. First up is Panna,
Global Head of Engineering Developer Experience at Bloomberg LP, with her
keynote: Building a developer culture with InnerSource.


Transcript of talk

I’ll just start by introducing a little bit of my background, how I got here, and why I do what I do today.

I run something that we call Developer Experience at Bloomberg. How I got there is a slightly long-winded story so bear with me.

I started off in the ‘90s working as a late night lab assistant in India, which then led me to supporting the manufacturing sector and then the financial sector. When you joined as a consultant, you literally did everything. You were an app developer. You were a system administrator. You were a DBA. You were the person who wired up the cables and got into the machine room and did everything.

That then led me to deciding to follow databases. I became a database consultant and moved to Asia in the financial sector. Long story short: I landed up here in New York City, working in the financial sector again, focusing on application development and databases, and then system engineering, and performance engineering. This long-winded story comes back to why it puts me in this unique position for Developer Experience, as I discovered that – going through these various roles in these various countries, encountering different types of developers – the one thing that I developed a lot for was empathy.

I realized that every personality type in computer science is different: system administrators, DBAs, app devs, and web developers all have their own personality types. We all have our own quirks, our own religion, and do things in a very particular way that’s unique to us. It’s “my way or the highway” and “I’m god’s gift to mankind”.

That made me develop a lot of empathy that led me to saying, well how can we make our developer experience across the spectrum of people different? Because we all actually want to work with each other. It’s not that we don’t want to work with each other. But we are so myopic in the way that we want to develop our piece of the puzzle, that we don’t see the other person’s point of view.

Two and a half years ago, I got approached by Bloomberg to try and explain how we should be developing software. I had been at Goldman Sachs for 16 years in various roles, and as a tech fellow at Goldman we were driving technology in a particular way. When I started talking to Bloomberg, it was really about how we can continue to embrace change, keep our developers excited, continue to develop quality software, and be the product leader that we have been for so long. So we created this team called Developer Experience.

So basically, my team does anything and everything: owning the tools required by the developers, the process, the framework, the training, the documentation. I even got asked, and this is a real story, whether we were responsible for making sure there were enough male bathrooms. But don’t quote me on that one.


Let me make a one-second pitch for Bloomberg: Since the majority of people here are from New York City, you’ve obviously heard of Mike Bloomberg. Anybody not heard of Bloomberg the product company? All right. Awesome. So just to make sure we’re all on the same page: we’re a product company based predominantly in New York City, and we’re in the business of providing information. We have 5,000 engineers across the globe who work on developing the Bloomberg Terminal platform. You’ve heard of Businessweek and you’ve heard of the Bloomberg media news outlets, etc., but our predominant product is the Terminal.

I said we were in the business of providing information. As you can imagine, it’s been 30 years of development that have gone into building the Terminal, to be able to provide very high touch individuals with specialized functions to get the information they need. Whether it’s news, trade related, analysis, portfolio management, to finding out the current events that are happening. So we have to really, really work to provide low latency information and also a broad variety of information.

You can imagine that everyone is working on different functions. Could be for the equities business or it could be for the news media or it could be for some of the back-end functions, just to develop this Terminal itself.

When I got in, what we discovered was that we collaborated a lot. There was a lot of collaboration, but we collaborated the old-fashioned way. Spoke to people. Put in tickets for other people to do things. We were transparent about the fact that your ticket is going to wait for me to finish what I’m working on, or it has to get prioritized by whichever product owner of whichever team I’m working in.

I started exploring the concept of how can we bring open source best practices in house, inside the enterprise. As we started exploring that, we called it “collaboratively sourced” or we called it “internal open source” and finally some of our active developers came up with the fact that hey, this thing already exists in the market. It’s called InnerSource.

Anybody hearing InnerSource for the first time? Cool. It wasn’t a well-known term. We hadn’t heard about it. We were just wanting to bring the best practices in house. After going through a few iterations, we settled on saying fine, we’ll call it InnerSource. Of course, having spent the past 20 years in the industry, my fear was InnerSource would get thought of as developers outside the company working on it. But in any case, supposedly O’Reilly had coined the term 17 years ago, so the term existed.

About 17 or 18 companies decided to collaborate to see if we could make InnerSource a much more popular term. But more important for me was really bringing those practices in house and breaking down the barriers that exist within an enterprise.

What I discovered is: Most enterprises have grown up organically over the years. It’s very easy in a startup to start with a clean, fresh slate and work on things, and be open about it, be transparent about it, be able to look at each other’s code, etc. In an enterprise that’s evolved over 20 years, you’ve grown up in silos. Things have been added on as they got added on, so people hold on to their little bit of the codebase. That’s how they’ve grown. What we needed to do was figure out a way of breaking down those barriers and really embracing some of the benefits.

I list out the benefits here – transparency, collaboration, innovation, community, reuse, mentorship, self help – but there are obviously more benefits to open source than these. What I was really focusing on for our developers is collaboration and a lot of the community and the camaraderie that can be built up, so that we have empathy for each other. We are able to work with each other without getting protective about it. That’s pretty much the theme that started it.


So how did we go about it? Initially, it was really just talking about the concept, just like how I asked you who had heard about InnerSource, and a lot of you hadn’t. When I spoke about “collaboratively sourced”, it was met, of course, with a lot of skepticism. There were a lot of critiques. It was like, it’s just not going to work. Like, why the hell would we do this? We are already working within our teams. We have enough work to be done within our teams. So the first thing was really about introducing the concept and saying, at least be open to the idea that I can go across an artificial enterprise, team, or organizational boundary and look at somebody else’s code and add value, or vice versa: have somebody else who may have something to offer, may have some sort of expertise, or just be interested in that area, come and look at my codebase.

Just sharing that idea and getting the excitement created within the development community was the first step that we went about. I did think about, and everybody asked me, what sponsorship do we have? Do we have management buy in? What I had done was just tested the waters, to say, if I go and ask for permission, will it get squashed down or should I push for this collaboration social experiment and beg for forgiveness later?

Fortunately, one of Mike Bloomberg’s business principles states: take risks and beg for forgiveness if needed. So that’s how we decided to start getting the developers excited about the fact that, hey, if you are waiting on somebody to do something for you, you can offer to help them out. The offering to help them out is where the whole drama lies, but the concept was well understood.

Identify early adopters

Once the concept got understood and a few of the developers started embracing the concept, that’s when we said, let’s identify early adopters. I was in this unique position to say, I own the tools. I own the tools that developers use, so let’s open up these tools and if the next person who asks me for a feature, we’re going to tell them that they can help us and contribute to our codebase and make that happen for themselves.

Two and a half years later it sounds way easier to say what I’m saying. Trust me, all my team leads hated me. Literally hated me. They were like, why is she bringing all this into this? We were functioning just fine, right? Because I mandated that we open up our own codebase.

So I identified early adopters. In your organizations it really is about finding out what’s that sweet spot, what’s the low hanging fruit, which of the less resistant applications, functions, codebases would be able to accept external contributions. Then, it really was about mobilizing people to actually start to engage and contribute.

Engage (at all levels)

This is where we were slightly unique because we already had a few years ago started off with engaging our development community a lot more. We have people who are called partners, we have people who are called champs, we have people who actually within their own areas, are responsible for being the people who engage with the other teams, continue to collaborate with the other teams, as well as pushing best practices. So it was really leveraging this champ/partner community. We have tech reps as well, really embracing and getting them to embrace it and then they started engaging with us to say, can they start contributing to these early repositories because it really was setting precedence.


Once we had the precedence set, because we had a few repositories where people from other teams could now contribute code, we could then start to evangelize a little bit more and even tell middle management, or our managers, or our senior management, that this concept can work. We just have to push the envelope a little bit. So really it was the engagement on all levels and the evangelization.

This is really about tooting your own horn for the project itself. So, for InnerSourcing to be accepted, and for people to start understanding the terminology, we created an internal function called sauc. It’s just a play on InnerSource. When you type in sauc, it’s nothing but a list of repositories with pull requests, bugs, and help-wanted tags. It’s a very, very simple bit of gamification: not to call out people, but to call out projects that have lent themselves to being contributed to as well as extended.
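The real sauc tool is internal to Bloomberg, but a rough sketch of what such a listing could compute looks like this. The data shape and function name here are hypothetical:

```python
# Sketch: given open issues across internal repos, surface the repos
# that are actively inviting contributions via "help wanted" labels.
from collections import defaultdict

def contributable_repos(issues):
    """Group open 'help wanted' issue titles by repository, busiest first."""
    by_repo = defaultdict(list)
    for issue in issues:
        if issue["state"] == "open" and "help wanted" in issue["labels"]:
            by_repo[issue["repo"]].append(issue["title"])
    # Most invitations first, so welcoming projects get called out.
    return sorted(by_repo.items(), key=lambda kv: len(kv[1]), reverse=True)

# Made-up sample data for illustration:
issues = [
    {"repo": "build-tools", "title": "Add JSON output", "labels": ["help wanted"], "state": "open"},
    {"repo": "build-tools", "title": "Speed up cold start", "labels": ["help wanted"], "state": "open"},
    {"repo": "docs-site", "title": "Fix broken anchors", "labels": ["help wanted"], "state": "open"},
    {"repo": "docs-site", "title": "Typo in README", "labels": [], "state": "open"},
]
print(contributable_repos(issues))
# → [('build-tools', ['Add JSON output', 'Speed up cold start']), ('docs-site', ['Fix broken anchors'])]
```

The point of the gamification is in that sort order: projects that open themselves up the most float to the top of the listing.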


That’s a key, but in our case, we got sponsorship way later into the game when we had already sort of proved concept, to say this can actually work.


Organizational structure, culture and dynamics

So I mentioned change is hard. Tell me to walk on the left side of the road when I get out of Penn Station and I go berserk because I have a set pattern. I walk on the right side of the road to get from Penn Station to Park. That’s just a simple example. Try telling any one of you to change the way you write code or change the way you do certain things and you’re going to be questioning why it needs to change because it works for you. The way I develop software on my laptop works for me, why the hell should I change anything, right?

It was really, really hard to get people to accept that we were now going to introduce the ability for other people to contribute to your codebase, and for you to contribute to other people’s codebases. Keep in mind, I’m talking about history. People at Bloomberg have been around for 20 years, so they have set patterns; they have ways of working. It’s not like we’re changing 5,000 people overnight. When new people come in, of course they want to be able to bring their new best practices with them. Getting people comfortable with the idea that organizational structure should not be the barrier which keeps you from going across organizational boundaries is something that we really had to work on. It was a lot of talking, a lot of sharing stories, and convincing people that this is good for them.

There is a lot of pride in the work that we bring, the work that we have done. Sure, I took some short cuts at some point in time, but I had pride then and now, because I don’t want to feel exposed, I’m reluctant to open it up and allow people to critique it or comment on it.

So, code pride is one of the things that I found was the biggest road block because people have evolved over time and they have become better, they have embraced new things, so if you go back and open up something that somebody wrote 15 years ago, and start saying this person doesn’t know what the hell he was writing about. Yeah, maybe he didn’t. Maybe he or she had no clue what they were doing at that point in time, but they have evolved now.

What became really interesting is creating a culture where it was okay to talk about code that was written earlier, but it was not okay to say nasty things. It’s easier said than done.

We use IB, Instant Bloomberg, as our internal chat. I can’t police multiple chat rooms to say people shouldn’t say nasty things to each other. I’m guilty of saying, this is just ridiculous stuff, I’m not even going to bother commenting on it or they shouldn’t have done it this way, but we had to start creating cultural awareness that you cannot bite somebody else’s head off, you cannot be nasty, and creating the informal policing.

I talked about champs. I talked about tech reps. I talked about evangelists. Creating that culture where developers feel that they can hold other people accountable when they’re getting vicious, or even slightly aggressive, on IB, or mail, or even in comments in the codebase itself. Really embracing the fact that you’re coming from different places and you’re contributing at different times, and therefore passing judgment is not acceptable, and we have to find more constructive ways of changing that codebase and making that change.

Accepting contributions is hard work

As soon as we started getting over that barrier, the next thing was, well, accepting contributions is hard work. And then taking ownership of somebody else’s contribution, because I may still own the product and I’m responsible for when the tool breaks. It then became the concept of saying, well we have to then define how am I going to discuss the feature request coming in, or how am I going to deal with the idea, the suggestion, the code contribution coming in, and what are the rules of engagement? What are my guidelines? Or, what’s my point of entry? And what are we willing to engage with and what are we not willing to engage with.

Now, in the whole Agile transformation, it’s tough because you need the product owners to buy in, you need the team to buy in, you need to be able to set aside the time to accept contributions that weren’t part of your regular stream of work coming in.

Trusted committers

This is where we started to talk about expanding our trusted committers beyond the organizational or product team. Could we grow our list of trusted committers beyond the organizational boundaries? This was really, really hard work. It of course meant that there was a lot more coaching involved, a lot more shadowing involved, a lot more people getting familiar with your codebase before they could actually help out with code reviews, by being able to review your pull requests, etc.



It’s complicated. There is a lot of risk to it and the risk is if I’m owning a production function and you’ve contributed code to it, and something breaks, I’m going to be held accountable for it. I’m going to have to support it. I’m going to have to get up at night and deal with whatever broke because you didn’t test some edge condition. It’s easier just to say no we can’t do this. So that risk was the biggest one that we had to overcome, to start to encourage a higher standard of acceptance such that we could take this risk of getting contributions from other people.


The next one is commitment, both time and team.

Time is the easier one because – and I got into trouble for saying this in Europe, but – everyone does 120%. Some people even do 140%. It is basically going beyond your day job and providing some contributions to other teams or even functions you're interested in. But Europe has some very strict guidelines around the 80/20 and capping the time people work at 100%, so I was told to work on figuring out how we could fit our external or additional contributions within 100%.

So we went back to this concept of 80/20, 90/10, call it what you want. We didn't really advertise it internally, but we took a few teams and said, let's try it out, to see how much our work really suffers if we do set aside 10% of our time to work with other teams or on things that we may be interested in personally. The product buy-in on this was really, really tough, and some of the teams just didn't embrace it initially.

I won’t say this was a solved problem, but we continue to work on it and I truly believe persistence is going to pay off. Over time, both project owners as well as the teams will realize that having that 10% flexible time for me to work with other projects that I’m maybe interested in, is actually a motivating factor for me because I don’t mind the drudgery stuff I may be doing or I don’t mind some of the other things that may come on my plate, because I have that 10% excitement of what I want to do. Or try out a new thing. Or be able to look at a new exciting project that’s going on in some other part of the firm and be able to do that.

It also led to this whole confusion between projects and products and projects done by teams versus what the product ownership is. The reason I bring that up is, even though the Terminal is one big product, we have multiple products within the Terminal, and then we have multiple projects that go across multiple products – so there’s no one size fits all. So we would have to figure out what the right distribution of work across projects and products is. As I said, in an enterprise having evolved over time, time accounting or people accounting became a question, to say, where does a developer’s time go?

Again, we went back and forth on this, saying we shouldn't worry about where a developer's time goes; we should worry about the progress that we're making with moving the product forward or the project forward, and the innovation that we're bringing in, the new functionality that we're bringing in, which is what our higher end clients need.

That again took a lot of proving: actually getting some initial teams to participate in this sharing concept and move the product forward, and getting the product owners to actually talk about the wins. We had some crazy user stories being shared internally about functional changes that we were able to move forward much faster because multiple teams were now able to collaborate at a completely different level.

Rules of Engagement

I spoke about rules of engagement. We didn't set a one-size-fits-all; we didn't say this is the participation criteria across the board. We said every team can now put in a contributing.md, and I said team but I really mean a repository. Any repository can define its rules of engagement. You must have a contributing.md, otherwise you'll get a default one, and in it go your rules of engagement and your acceptance criteria for contributions. Then what we started doing was scraping all the repositories to find these contributing.md's, and we started to showcase where people were being forthcoming about defining their rules of engagement and their acceptance criteria really well, as well as where it was then possible for others to start contributing.

Buy in

I already spoke about the buy in. We had to get it, and we cheated again a little bit on this: we went around the product owners and found out which product owners would lend themselves to this concept of inner sourcing, who would lend themselves to saying some portion of my time, 5%, 10% of my developer time, can be spent. And we had various sorts of negotiations with them, which is to say: fine, I can move tickets in some other project as long as it helps my project move forward. If I'm waiting on somebody, I have the leeway to contribute to that particular project and move it forward for myself.

So we engaged with some of the earlier product owners. Once we had gone through six months with them, we got them to talk to other product owners and showcase how it was beneficial – not just in terms of moving the product forward, but in terms of the developers engaging with the other development teams on a much deeper level. We were engaging and collaborating at the codebase.

It also helped with keeping us motivated, and with having options. I now like what's happening in the other team, and I have mobility options to actually move to the other team and help them move things forward. We had various occasions where somebody moved to another team for six months, helped move a particular area forward, and then moved back. It just opened up a variety of avenues for us.

It’s my baby

Finally, breaking the myth of "it's my baby." How many of you still hold onto stuff saying that it's your baby? I used to be one of those people 10 years ago. You have to learn to let go. Literally, it's not going to get better continuing to be your baby, right? At some point, even parents let their kids go. So breaking down that concept of "this is my baby and I'm not going to let go" was probably the toughest one. We had to continue to push on those people who thought it was their baby and work with them on an individual, one-on-one basis. I will claim we still have a few. I think we've broken down most people to at least open the doors. They still think it's their baby, but they've opened the doors. We still have a few more to go, and that's been the toughest one.



I started my talk with what got me to this place, which is that I had experiences across Asia. So in India, it's a very hierarchical society. It's changing, so don't throw darts at me right now; this was 20 years ago. We had just started. The computer boom had just started in the mid 90s, and it's very hierarchical. You get delivered a set of tasks. You move forward. Nothing else. You're not allowed to question or offer a suggestion.

Unfortunately, I was sort of badly placed there. I was born and brought up in India, but I was badly placed there because my dad sort of encouraged us to really do whatever the hell we wanted. We were two girls. When I started working, it was really tough for me to hold back, because if I didn't agree with something, I would say, this is just wrong, I'm not going to write it this way. That was my individual experience. However, I discovered, as I moved into system administration and database administration, where I was one of 50 to 100 guys sitting there, that I was okay working with them because I had grown up that way. But the other girls that started joining our teams were not okay with it. They found it difficult. That's when my head started clicking and I said every single individual is different. I could deal with it, but some of the others aren't dealing with it.

Then I discovered that it's not a gender thing. It's individuals. There are individuals who are soft spoken. There are individuals who don't mind putting their ideas out there. There are individuals who are more thoughtful. There are individuals who will speak off the cuff. And we have to account for those different individual traits when we start collaborating, and we have to try to create a practice where everybody is sort of self-policing so that we are more inclusive. That's something that just doesn't come naturally. But we had to start talking about it, and so I held more and more intimate 20 person, 30 person sessions pointing out behaviors within instant messaging, behaviors with updates on tickets, behaviors with code reviews that were nasty, or appeared to be nasty. I am a direct person. I can ask a direct question. I didn't think twice about it. But the person on the other end, or a newcomer on the other end, may perceive it as: hey, I'm not going to touch this thing, because the next thing I do my head is going to get bitten off, and I just don't have time to engage with it.

That really is something for each one of you individually: if you're just conscious of the fact that the other person might be coming from a different place, we can collectively improve the developer culture across the board. Just by being conscious that somebody is coming from a different place and may have different criteria in mind.

(By the way the picture up there is a fish tank at Bloomberg and I always get fascinated because every fish behaves differently and that’s why I wanted to put that picture in there.)

Building a culture of mentoring/coaching

Building that culture of mentoring, of coaching, is every individual's responsibility, and just making people conscious, all developers conscious, of the fact that this is what's happening. I found a unique way to do it: get a bunch of people in a room, of varying experience and at different levels in the organization, so two years' experience to 20 years of experience, and ask very provocative questions on how they would react to a certain statement out there, or how they felt when something happened in IB and instant messaging, or what they would think if they had a newbie question to ask. Who would they ask that question to? How would another person who was sitting there react to that newbie question?

When we started having these conversations, it was very interesting, because people discovered, just observing and hearing other stories in the room, behaviors that they probably needed to change themselves. It wasn't anybody judging; it was just sharing stories. And we still do that to bring developers together. Even in meetups, we ask those questions, especially around being more inclusive, about the varying range we have.

Community driven

As I said, it's all community driven. We leverage the champs, the partners, the SI partners a lot. I encourage developers to be their own police, so we don't have a top-down mandate. We don't have a management decree coming down that thou shalt do this. It is all based on grassroots efforts. We do have support now, though.


Of course, even though we are all individuals and every single individual contributes to it, nothing moves forward without us moving as a team, including this collective developer audience as a team.

[Indicates slide outlining roles within team and a picture of a dog sled race team] The reason I very specifically picked this sled racing team is to point out a very, very simple difference. They are not eight equal dogs. Each dog has its own personality. If they are trained, they are trained for the position they are running at. So the leader dogs versus the end dogs have very different roles to play, and part of the training goes into training those dogs that way. Those dogs don't behave out of their roles. They behave in their roles.

I don't mean to compare us to a sled dog team, but the reason I point this out is because when we're functioning in a team, you will realize, and you probably already know this, that everybody plays a different role apart from the official role. So some are more mentors. Some are more coaches. Some can ask more questions. Some are more outspoken and will take over the meeting or take over the design discussion, and it's their way or the highway.

What I'm really asking is that while roles should be determined for the team as to who is doing what, we should really embrace the differences and make people comfortable to participate in the team, even though the roles are clearly defined. Where people take over meetings, or where people may be bullish about something, find somebody to coach them, mentor them, to change that behavior so that the team itself can evolve and become a better team.


When I go back two and a half years, I said okay, what are our silos? Why are we so siloed? Where is the collaboration happening? We didn't have the appropriate tools. Tools did play an important role. Repositories were permissioned individually. I won't even go back to whether we were using CVS, ClearCase, SVN, or earlier versions of Git. We are now on Git, but the reason I bring that up is because repositories are traditionally individually permissioned. That's how we grew up in silos. So the first thing we did was say, okay, we're going to open up our repositories.

“Oh no, no, no. You can't open up my repository because I have sensitive stuff there.” What sensitive stuff? “Oh, we store passwords there.” I'm like, that's not sensitive stuff. Take your passwords out. The repo is not the right place to store the password and keep a closed repository. This is a true story, by the way. But it always came up as “oh, we have sensitive code. We cannot open it up.” As I started questioning what the sensitivity of that code was, it always boiled down to one of three invalid reasons, and I'll tell you the one valid reason too.

“I have some secure stuff there that I shouldn’t have put there but I put it there so therefore I can’t open it up beyond my team.”

“I have some…” am I allowed to say shitty code? “I have some really badly written stuff here. We’ve made some bad decisions. We have to clean it up before we can open it up. So we’ll clean it up and then we’ll open it up.” Of course it’s never going to get there.

And last but not least is “oh, because these are low level APIs and we don't want them to know what we're doing and be able to leverage some of the other stuff. We can't open it up, because then they'll see the decisions we're making and directly access our codebase.”

The one valid reason, which is probably true across the industry, is that 10% or 20% of the codebase probably was truly specific to competitive business advantage: maybe some algorithms, maybe some business logic, etc. That was the key, the glue, the thing that sold your product. 10%, 20%, and this is just an off the cuff number.

The minute we said that all repositories are open by default, we started working through these restrictions that were laid out to us and we started just saying, you’ve got to open a repository by default. If you want to close it up, you’ve got to talk to me. Then of course I’ll wear you down with asking you questions and then I’ll take you to your CIO and wear you down with asking questions.

That was our first thing. We were pretty scared and skeptical that because of this rule we would not get people moving into this concept. But, going back to how I started, with the early adopters, with the evangelists that we created, with the buzz that got created about having a tool set that actually works, we started getting people moving their codebases over from whichever legacy repository they sat on. And we started getting more and more requests for features, such that the tools would enable them to embrace this change.

Agile brought in some rules. So our Agile coaches came with some very specific guidelines. We had to then coach our coaches on some of the 'religion' that they brought in versus the common sense that we just needed to work within our enterprise, because our enterprise had grown up that way. Just to get them to recognize the differences and work with us was a challenge, but it's something we worked through.

What we also discovered was thinking outside the box. In a traditional enterprise company, we didn't have hackathons. We didn't have coding days. We didn't have space for ideation. So we started sponsoring a lot of those kinds of activities. We have something called a Dev-X hackathon which runs for a couple of days. We have it two or three times a year. We have it in London and we have it in New York. We do other cool stuff, the swag, the prizes, but the fact is that the ideas that come out of the Dev-X hackathon get sponsored to get built up as a project.

So a lot of the tools integrations that we built up were a result of the hackathons. The idea started in a hackathon and then got sponsored, and we could go ahead and build out that integration amongst the various other SDLC tools.


We found this to be a really, really interesting way of encouraging collaboration across teams. All the newcomers, or the new classes that come in, I always encourage, as part of their training, to take on stretch projects. If they are interested in taking on a stretch project, I encourage people to work with their management teams to allow them to do those stretch projects. That seems to have helped us in moving the agenda forward, because as people start to see the benefits, both for the people that are working in their teams as well as the projects that they are working on, everybody has got a win-win situation.

What we also started to do was say: if I have opened up my repository, I'm going to have some issues being tracked, and I can put help wanted tags on the issues that are low hanging fruit for me, and I can get others to contribute to them. I mentioned this function called sauc. You run sauc, you get to see a list of repositories which have help wanted issues. For somebody who doesn't know specifically where they want to get started, or who may have an idea, they can just go look at the help wanted issues, pick them up, and work on those. This was a really easy way for us to say: do it yourself. Self-serve yourself for moving something forward.

What we also did was shared infrastructure projects. Everything in software infrastructure related to the tools, we opened up. Some places it's harder to accept contributions, especially when it's related to compiler flags which go across the entire stack, so those are harder contributions to accept. But some of the simpler contributions are really straightforward. We also started to understand common pain points, because as you can imagine, we've opened up a codebase now, so I can look across the entire sauc repository and see everybody's projects. If I have somebody working on containers, I can see who else is working on containers and create those opportunities to pull them together, so that we don't have everybody doing their own thing. And pull those projects together and actually sponsor it as a project, so we can collectively work on that one functionality or feature or piece of infrastructure that we're developing.

And with that, I'm going to encourage all of you to get started. You can look at innersourcecommons.org. I have references at the end, and the three things, which I called my three-pronged approach when I started, are: identify the early adopters, evangelize, and then get the sponsorship. I hope this has been useful for you.

Panna Pavangadkar is the Global Head of Developer Experience for Bloomberg’s Engineering group. Building a great developer experience is at the heart of what her team does, providing a development environment that enables application developers to focus on building the company’s core product: the Bloomberg terminal. Her past experience with databases, operating systems, infrastructure and application development in various engineering roles – as Technical Fellow, Vice President at Goldman Sachs (New York) and JP Morgan (Singapore) and NIIT (Pune, India) – gives her a unique perspective to understand and work to introduce and encourage wide scale change within the company’s developer and engineering communities.

Read more at the source

Our 10-point technical debt assessment

In the six years since we first introduced radically simple metrics for code
quality, we’ve found that clear, actionable metrics lead to better code and
more productive teams. To further that, we recently revamped the way we measure
and track quality to provide a new, clearer way to understand your projects.

Our new rating system is built on two pillars: maintainability (the opposite of
technical debt) and test coverage. For test coverage, the calculations are
simple. We take the covered lines of code compared to the total “coverable”
lines of code as a percentage, and map it to a letter grade from A to F.
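As a sketch of that mapping (the letter cutoffs below are invented for illustration, not Code Climate's published thresholds):

```python
def coverage_rating(covered, coverable):
    """Map covered vs. coverable lines to a percentage and a letter grade.
    The grade boundaries below are assumed for illustration only."""
    pct = 100.0 * covered / coverable
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if pct >= cutoff:
            return pct, grade
    return pct, "F"

print(coverage_rating(850, 1000))  # → (85.0, 'B')
```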

Technical debt, on the other hand, can be a challenge to measure. Static
analysis can examine a codebase for potential structural issues, but different
tools tend to focus on different (and often overlapping) problems. There has
never been a single standard, and so we set out to create one.

Our goals for a standardized technical debt assessment were:

  • Cross-language applicability – A good standard should not feel strained when
    applied to a variety of languages, from Java to Python to JavaScript. Polyglot
    systems are the new normal and engineers and organizations today tend to work in
    an increasing number of programming languages. While most languages are based on
    primitive concepts like sequence, selection, and iteration, different paradigms
    for organizing code like functional programming and OOP are common. Fortunately,
    most programming languages break down into similar primitives (e.g. files,
    functions, conditionals).

  • Easy to understand – Ultimately the goal of assessing technical debt with
    static analysis is to empower engineers to make better decisions. Therefore, the
    value of an assessment is proportional to the ease with which an engineer can
    make use of the data. While sophisticated algorithms may provide a seemingly
    appealing “precision”, we’ve found in our years of helping teams improve their
    code quality that simple, actionable metrics have a higher impact.

  • Customizable – Naturally, different engineers and teams have differing
    preferences for how they structure and organize their code. A good technical
    debt assessment should allow them to tune the algorithms to support those
    preferences, without having to start from scratch. The algorithms remain the
    same but the thresholds can be adjusted.

  • DRY (Don’t Repeat Yourself) – Certain static analysis checks produce highly
    correlated results. For example, the cyclomatic complexity of a function is
    heavily influenced by nested conditional logic. We sought to avoid a system of
    checks where a violation of one check was likely to be regularly accompanied by
    the violation of another. A single issue is all that’s needed to encourage the
    developer to take another look.

  • Balanced (or opposing) – Tracking metrics that encourage only one behavior
    can create an undesirable overcorrection (sometimes thought of as “gaming the
    metric”). If all we looked for was the presence of copy and pasted code, it
    could encourage engineers to create unwanted complexity in the form of clever
    tricks to avoid repeating even simple structures. By pairing an opposing metric
    (like a check for complexity), the challenge becomes creating an elegant
    solution that meets the standard for both DRYness and simplicity.

Ten technical debt checks

With these goals in mind, we ended up with ten technical debt checks to assess
the maintainability of a file (or, when aggregated, an entire codebase):

  1. Argument count – Methods or functions defined with a high number of arguments

  2. Complex boolean logic – Boolean logic that may be hard to understand

  3. File length – Excessive lines of code within a single file

  4. Identical blocks of code – Duplicate code which is syntactically identical
    (but may be formatted differently)

  5. Method count – Classes defined with a high number of functions or methods

  6. Method length – Excessive lines of code within a single function or method

  7. Nested control flow – Deeply nested control structures like if or case

  8. Return statements – Functions or methods with a high number of return statements

  9. Similar blocks of code – Duplicate code which is not identical but shares the
    same structure (e.g. variable names may differ)

  10. Method complexity – Functions or methods that may be hard to understand
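To make these concrete, here is a hypothetical function (names and limits invented for illustration) that would plausibly trip several of the checks at once:

```python
def ship_order(name, street, city, zip_code, country, weight, express):
    # Argument count: seven parameters is a data clump waiting to happen.
    if country == "US":
        if express:
            if weight > 50:  # Nested control flow: three levels deep.
                return "freight"
            return "overnight"
        return "ground"
    if express:
        return "international-express"
    return "international"  # Return statements: five exits in one body.

print(ship_order("Ada", "1 Main St", "Springfield", "01101", "US", 10, True))
# → overnight
```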

Check types

The ten checks break down into four main categories. Let’s take a look at each of them.


Size and count

Four of the checks simply look for the size or count of a unit within the
codebase: method length, file length, argument count and method count.
Method length and file length are simple enough. While these are the most basic
form of static analysis (not even requiring parsing the code into an abstract
syntax tree), most programmers can recall times when the sheer size of a unit
of code has presented challenges. Refactoring a method that won’t fit on one
screen is a herculean task.

The argument count check is a bit different in that it tends to pick up data
clumps and primitive obsession. Often the solution is to introduce a new
abstraction in the system to group together bits of data that tend to flow
through the system together, imbuing the code with additional semantic meaning.
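A sketch of that refactoring, with hypothetical names, might look like:

```python
from dataclasses import dataclass

# Before: the long parameter list hints that these values travel together.
def label_before(name, street, city, zip_code, country):
    return f"{name}, {street}, {city} {zip_code}, {country}"

# After: grouping the clump into an Address gives the data a name and
# shrinks every signature it flows through.
@dataclass
class Address:
    street: str
    city: str
    zip_code: str
    country: str

def label(name, addr):
    return f"{name}, {addr.street}, {addr.city} {addr.zip_code}, {addr.country}"

print(label("Ada", Address("1 Main St", "Springfield", "01101", "US")))
# → Ada, 1 Main St, Springfield 01101, US
```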

Control flow

The return statements and nested control flow checks are intended to help
catch pieces of code that may be reasonably sized but are hard to follow. A
compiler is able to handle these situations with ease, but when a human tasked
with maintaining a piece of code is trying to evaluate control flow paths in
their head, they are not so lucky.
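One common remedy, sketched here with invented names, is to replace nesting and scattered returns with guard clauses; the behavior is unchanged but the paths are far easier to trace:

```python
# Before: three levels of nesting force the reader to carry context.
def discount_nested(user):
    if user is not None:
        if user.get("active"):
            if user.get("loyalty_years", 0) >= 5:
                return 0.2
            else:
                return 0.1
        else:
            return 0.0
    else:
        return 0.0

# After: guard clauses flatten the control flow without changing behavior.
def discount(user):
    if user is None or not user.get("active"):
        return 0.0
    return 0.2 if user.get("loyalty_years", 0) >= 5 else 0.1

print(discount({"active": True, "loyalty_years": 6}))  # → 0.2
```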


Complexity

The complex boolean logic check looks for conditionals laced together with
many operators, creating an exploding set of permutations that must be
considered. The method complexity check is a bit of a hybrid: it applies the
cognitive complexity algorithm, which combines information about the size,
control flow and complexity of a functional unit to estimate how difficult a
unit of code would be for a human engineer to fully understand.
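For the boolean case, a typical fix is to name the sub-conditions. This is an illustration with invented predicates, not output from the check itself:

```python
# Before: one conditional laced with many operators.
def can_checkout_raw(cart, user):
    return (len(cart) > 0 and user["verified"] and not user["banned"]
            and (user["balance"] > 0 or user["credit_ok"]))

# After: each named predicate can be read (and tested) on its own.
def has_items(cart):
    return len(cart) > 0

def in_good_standing(user):
    return user["verified"] and not user["banned"]

def can_pay(user):
    return user["balance"] > 0 or user["credit_ok"]

def can_checkout(cart, user):
    return has_items(cart) and in_good_standing(user) and can_pay(user)

u = {"verified": True, "banned": False, "balance": 0, "credit_ok": True}
print(can_checkout(["book"], u))  # → True
```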

Copy/paste detection

Finally, the similar and identical blocks of code checks look for the
especially nefarious case of copy and pasted code. This can be difficult to spot
during code review, because the copied code will not show up in the diff, only
the pasted portion. Fortunately, this is just the kind of analysis that
computers are good at performing. Our copy/paste detection algorithms look for
similarities between syntax tree structures and can even catch when a block of
code was copied and then a variable was renamed within it.
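The core idea, that structure matters while identifiers don't, can be sketched in a few lines using Python's standard ast module (a toy illustration, not Code Climate's actual algorithm):

```python
import ast

def normalized(src):
    """Dump the syntax tree with identifiers erased, so two blocks that
    differ only in variable names compare equal."""
    tree = ast.parse(src)
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            node.id = "_"
        elif isinstance(node, ast.arg):
            node.arg = "_"
    return ast.dump(tree)

a = "total = price * qty"
b = "amount = cost * count"
print(normalized(a) == normalized(b))  # → True: same shape, renamed variables
```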

Rating system

Once we’ve identified all of the violations (or issues) of technical debt within
a block of code, we do a little more work to make the results as easy to
understand as possible.

File ratings

First, for each issue, we estimate the amount of time it may take an engineer to
resolve the problem. We call this remediation time, and while it’s not very
precise, it allows us to compare issues to one another and aggregate them.

Once we have the total remediation time for a source code file, we simply map it
onto a letter grade scale. Low remediation time is preferable and receives a
higher rating. As the total remediation time of a file increases, it becomes a
more daunting task to refactor and the rating declines accordingly.

Repository ratings

Last, we created a system for grading the technical debt of an entire project.
In doing so, we’re cognizant of the fact that older, larger codebases naturally
will contain a higher amount of technical debt in absolute terms compared to
their smaller counterparts. Therefore, we estimate the total implementation time
(in person-months) of a codebase, based on total lines of code (LOC) in the
project, and compute a technical debt ratio: the total technical debt time
divided by the total implementation time. This can be expressed as a percentage,
with lower values being better.
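A worked sketch of the ratio (the hours-per-line productivity constant is an assumption invented for this example):

```python
def debt_ratio(total_debt_hours, loc, hours_per_loc=0.05):
    """Technical debt time divided by estimated implementation time.
    hours_per_loc is an assumed productivity constant, not a published one."""
    implementation_hours = loc * hours_per_loc
    return total_debt_hours / implementation_hours

# 120 hours of remediation against a 50,000-line project:
ratio = debt_ratio(120, 50_000)
print(f"{ratio:.1%}")  # → 4.8%
```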

We finally map the technical debt ratio onto an A to F rating system, and
presto, we can now compare the technical debt of projects against one another,
giving us an early warning system when a project starts to go off course.

Get a free technical debt assessment for your own codebase

If you’re interested in trying out our 10-point technical debt assessments on
your codebase, give Code Climate a try. It’s always
free for open source, and we have a 14-day free trial for use with private
projects. In about five minutes, you’ll be able to see just how your codebase
stacks up. We support JavaScript, Ruby, PHP, Python and Java.

If you’re already a Code Climate customer, you can expect a rollout of the new
maintainability and test coverage ratings over the next few weeks – but if
you’d like them sooner, feel free to drop us a line; we’re always happy to help!

Read more at the source

A new, clearer way to understand code quality

It’s been six years since Code Climate introduced radically simple metrics for code quality: a grade
point average (GPA) from 0 to 4.0 for repositories, and a letter rating for every file. Today, we’re
going further by completely revamping the way we measure and track quality, and shipping a bunch of
new features to go along with it.

Every repository and file will now receive two top-level ratings:

Maintainability: An estimate of technical debt in the repo based on our standardized 10-point
assessment which looks at duplication, complexity and structural issues.

Test Coverage: The percentage of covered lines compared to the total number of lines of code.
(We ingest test coverage information from your continuous integration server using our new
universal test reporter.)

Here’s how it all comes together on your new repository Overview page:

On top of this foundation, we’re launching five major, new features:

Unified Code tab with maintainability and test coverage side-by-side

Gone is the isolated Test Coverage tab, which made you visit an additional place to get a full view of your
overall quality. We’ve fully integrated test coverage information into the Code tab and throughout the app.

Drilling down, you can access per-file quality statistics on the new Stats sub-tab:

In-app configuration of code quality analysis

It’s now possible to control the way we analyze your code quality using simple, in-app
configuration. Easily select which checks to run and exclude files that are not relevant.

You can also easily browse and enable open source static analysis plugins, taking advantage of the
30+ tools that are compatible with our open, extensible platform.

For those who prefer finer-grained control, or wish to keep their configuration in version control,
file-based configuration using .codeclimate.yml remains available. If checked in, the
.codeclimate.yml takes precedence over the in-app configuration.
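
For reference, a minimal `.codeclimate.yml` in the version 2 configuration format might look like the sketch below; the specific check, plugin, and threshold choices here are illustrative, not defaults:

```yaml
version: "2"             # version 2 of the analysis configuration format
checks:
  method-complexity:
    config:
      threshold: 10      # flag methods above this complexity (illustrative value)
plugins:
  rubocop:               # one of the 30+ compatible open source plugins
    enabled: true
exclude_patterns:        # skip files that are not relevant to analysis
  - "vendor/"
  - "spec/"
```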

Reorganized and expanded Trends area

With all this new data available, we thought it was a great time to add some organization to the
Trends tab. The left sidebar now makes it easy to get right to what you’re looking for.

We’ve also added a new chart allowing you to see the total amount of technical debt in the project,
and the overall project maintainability, all in one place.

Improved quality alerts via Slack and email

Our pass/fail pull request statuses are great for ensuring that every change merged into your
codebase meets your quality standards. However, every once in a while something may slip through the
cracks. For these situations, we’ve revamped the quality alerts we send via both Slack and email.
Here’s what it looks like in Slack:

And an email reporting some test coverage changes:

Alerts are sent when any letter rating changes, or any new files are created with “C” or lower
ratings. We think this new functionality is an excellent complement to our recently-launched
customizable issue alerts.

All of the above and more available via a REST API

OK, so this one isn’t completely new, but it’s gotten much better and we couldn’t resist including
it. Over the past year we’ve been developing Code Climate with an API-first methodology. As a
result, the data that powers all of the above features is available over a
robust REST API for you to take advantage of as you see fit.
Here are a few examples:

Get the current ratings for an individual file

$ curl \
  https://api.codeclimate.com/v1/repos/:repo_id/ratings \
  -H "Accept: application/vnd.api+json" \
  -H "Authorization: Token token=<token>" |
  jq '.data | .[] | select(.attributes.path == "path/to/file.rb")'
{
  "id": "59c41d36d0c53d0001000001",
  "type": "ratings",
  "attributes": {
    "path": "path/to/file.rb",
    "letter": "A",
    "measure": {
      "value": 220,
      "unit": "minute"
    },
    "pillar": "Maintainability"
  }
}
{
  "id": "59c41d36d0c53d0001000002",
  "type": "ratings",
  "attributes": {
    "path": "path/to/file.rb",
    "letter": "A",
    "measure": {
      "value": 92.40506329113924,
      "unit": "percent"
    },
    "pillar": "Test Coverage"
  }
}
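
If you'd rather filter the response in code than with jq, the equivalent of the `select` above is a simple list comprehension; the payload here is canned to mirror the response shape rather than fetched live:

```python
import json

# Canned payload mirroring the shape of the ratings response above.
response = json.loads("""
{"data": [
  {"id": "59c41d36d0c53d0001000001", "type": "ratings",
   "attributes": {"path": "path/to/file.rb", "letter": "A",
                  "measure": {"value": 220, "unit": "minute"},
                  "pillar": "Maintainability"}},
  {"id": "59c41d36d0c53d0001000002", "type": "ratings",
   "attributes": {"path": "path/to/file.rb", "letter": "A",
                  "measure": {"value": 92.40506329113924, "unit": "percent"},
                  "pillar": "Test Coverage"}}
]}
""")

# Equivalent of jq: .data | .[] | select(.attributes.path == "path/to/file.rb")
ratings = [r for r in response["data"]
           if r["attributes"]["path"] == "path/to/file.rb"]

for r in ratings:
    a = r["attributes"]
    print(f'{a["pillar"]}: {a["letter"]}')
# Maintainability: A
# Test Coverage: A
```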

Get a time-series of test coverage information

$ curl \
  https://api.codeclimate.com/v1/repos/:repo_id/metrics/test_coverage \
  -G \
  -H "Authorization: Token token=<token>" \
  -H "Accept: application/vnd.api+json" \
  --data-urlencode "filter[from]=2017-08-01" \
  --data-urlencode "filter[to]=2017-09-01" |
  jq '.'
{
  "data": {
    "id": "59c41d36d0c53d0001000001",
    "type": "metrics",
    "attributes": {
      "name": "test_coverage",
      "points": [
        {
          "timestamp": 1501459200,
          "value": 98.57142857142858
        },
        {
          "timestamp": 1502064000,
          "value": 98.50960160504442
        },
        {
          "timestamp": 1502668800,
          "value": 98.53490376328641
        },
        {
          "timestamp": 1503273600,
          "value": 98.53314527503527
        },
        {
          "timestamp": 1503878400,
          "value": 98.5405557114791
        }
      ]
    }
  }
}
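
Once you have the points array, summarizing a coverage trend takes only a couple of lines; this sketch reuses the values from the sample response above:

```python
# Points copied from the sample test_coverage response above.
points = [
    {"timestamp": 1501459200, "value": 98.57142857142858},
    {"timestamp": 1502064000, "value": 98.50960160504442},
    {"timestamp": 1502668800, "value": 98.53490376328641},
    {"timestamp": 1503273600, "value": 98.53314527503527},
    {"timestamp": 1503878400, "value": 98.5405557114791},
]

values = [p["value"] for p in points]
print(f"min {min(values):.2f}%  max {max(values):.2f}%  "
      f"net change {values[-1] - values[0]:+.3f} pts")
# min 98.51%  max 98.57%  net change -0.031 pts
```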

Just head over to the API Access page in your user settings
area to generate a personal access token and get started.

These features will be rolled out to all repositories on CodeClimate.com starting today.

Wrapping Up

We hope you’ll agree that these changes represent a dramatic leap forward for Code Climate. As we’ve
been testing this functionality internally and with a small group of customers, we’ve found it
really changes the way we interact with code quality information day-to-day.

There may be a few rough edges as we refine and polish some areas. As always, if you have any
questions about anything, please don’t hesitate to get in touch.
Our fantastic support team is always here to help.
