Ember Hot Seat: Broccoli, Tooling, and Open Source

DeVaris kindly invited me to his podcast,
the Ember Hot Seat. We talk about the
Broccoli build tool, the importance
of good tooling, and why and how I contribute to open source. Listen to it
here, or read the transcript below (lightly edited for clarity):

DeVaris P. Brown: All right, welcome everyone. It is a Saturday morning here for me. I’m working hard for you guys to make sure that you have the content that you want, and another highly requested person for the Hot Seat is here today. Everyone, welcome Jo Liss to the Hot Seat. Say hello to the world, Jo.

Jo Liss: Hi, everyone. Hi, DeVaris. Thanks for having me on your podcast.

DeVaris: No worries. Thank you for coming on. I know we’ve been playing, I guess, Twitter-tag back and forth, but it’s good to have you on the show.

Jo: Finally, yeah. I’m excited.

DeVaris: Yeah. Now, I guess, give the people a little bit of an introduction to who you are, and what you do kind of thing.

Jo: Yeah, sure. So, I guess most people know me from my open-source work in the front-end JavaScript space. I spent a couple months working on a tool called Broccoli, which is for building client-side JavaScript apps mostly. So, if you have used the Rails asset pipeline before, Broccoli is sort of like that except it’s back-end independent and the architecture is a bit more modern, I guess. And for my day job I run a solitaire site which is ad monetized, and that’s starting to pay the rent.

DeVaris: Oh, wow.

Jo: Yeah, that’s like officially my full-time business now.

DeVaris: You said solitaire, right? Like the card game that came with Windows that everyone plays?

Jo: Exactly, exactly. It’s an HTML5 solitaire, so it’s right there in the browser. And you wouldn’t believe how much search traffic there is for solitaire, it’s ludicrous.

DeVaris: Huh. So what’s the URL so we can go ahead and get that out there so everybody -

Jo: Sure. It’s solitr.com.

DeVaris: Okay. Well, everyone knows that you can re-purpose games.

Jo: Very much. There’s no copyright on it.

DeVaris: Oh, wow. Wow, that is amazing. Okay. So, yeah, everyone knows you from Broccoli nowadays, but when I first saw you, you were pretty much the testing expert. And we probably won’t get into this too much, but is there a preferred testing route that you choose, or do you have an opinion on the state of testing so far in Ember?

Jo: So, I think the new ember-testing package seems really good to me. I haven’t actually had a chance to use it, because right now I don’t have a single Ember app that I’m working on.

DeVaris: Okay.

Jo: So, I’m really behind on all the testing stuff and so I haven’t had a chance to play with it.

DeVaris: Okay. Now, I had Robert Jackson on last week, so I’ve got to make sure I ask you about tooling and stuff like that. Like you said before, you’re responsible for Broccoli, and a lot of the feedback that I see in the community is: which route is going to be blessed? Which set of tooling, which packaging is going to be the one? So I guess, give people the benefits of using Broccoli versus Ember tools, or Ember App Kit, or things like that.

Jo: So, Broccoli is a very low-level build tool, and the way it fits into the ecosystem is this: we’ve been using Grunt, and Grunt is a task runner. It’s not really a build tool, so if you use Grunt for building apps, then as your app grows the build is going to get slower and slower. What I’ve seen with medium-sized teams, just 5 or 6 developers working on an app for a couple of months, is that as the code base grows, the rebuild time gets to like 10 or 15 seconds. And that is clearly unacceptable, right? You edit a file, and then you reload the browser, and there’s a 10-second delay, and that really affects your productivity, having a delay like that in your core feedback loop as a developer. So Broccoli is a dedicated build tool, sort of like the Rails asset pipeline, as I said, and it plugs into Grunt, or you can plug it into other tools as well. And the stuff that is coming up with Ember CLI (the successor of Ember App Kit) is based on Broccoli now.

DeVaris: Oh, okay.

Jo: So, Ember App Kit was Grunt-based, but Ember CLI is based on Broccoli now, and Robert has actually been working on migrating the build process for Ember itself to Broccoli, which I think is very cool. And for me that’s really the acid test for Broccoli: to see if it holds up compiling a complex library like Ember.

DeVaris: Okay. Now look at that community collaboration. So, but what ended up getting you involved with Ember? Like what drew you to the community? And you were pretty prolific in the Rails community as well, so what kind of drew you to start contributing to the community as much as you did?

Jo: So I originally started working on a Rails app, a voting software app for universities, and the business never took off, but I was working on this Rails app, and I thought it would be great to have a good UI for it, because it’s a really hard problem that needs a good UI. And I started doing Ajax stuff with Rails, and it pretty soon turned into a spaghetti mess. That was around the time Backbone came out, so I switched to Backbone and used it for a while, and then I thought, well, this is not going to scale either; it is better, but it has a different set of problems. And then I switched to Ember. And with Ember, for the first time, I was like, “Oh yes, this thing actually solves the problem correctly, of binding data into the UI, into the HTML, and keeping it updated.” So that’s what got me into Ember, sort of the realization that it was really the correct tool for the job.

DeVaris: Okay. And then now, like I said, you contribute pretty heavily to tools and things. So what, I guess, what motivates you to participate in the community as much as you do and build these tools that pretty much everyone uses?

Jo: So, the way I look at it is... actually, let me back up a step. I’ve been programming since I was young, and I had a job between high school and university where I was doing C++ and Python stuff and just kind of messing around with code. Then in the late 2000s I was building my first Rails app, and I was like, “Wow, I’m incredibly productive right now.” I had actually taken time off from writing code, but I came back, picked up Rails, and all of a sudden I was ridiculously productive. So without doing anything, my productivity had increased. And that was a turning point for me, when I realized that if you want to be a great developer, it’s not so much about your own ability to write code; it’s about the tools that you use. And in the web space, most of these tools are open-source, thankfully, right? It’s not like .NET where you have all these proprietary libraries. So clearly, if you want to be a good developer, you should always try to find the best tools for the job. But the best developers are clearly the ones that wrote the tools, like DHH. Those are developers like you and me. He started Rails, and clearly he’s a great developer because he wrote the tool. And so I’ve approached my career with the notion that I should not just try to use good tools, but also produce tools.

DeVaris: Hmm, okay.

Jo: Work on the tooling.

DeVaris: That’s a very good perspective. And that’s one of the things where, as web development and JavaScript get more popular, people coming from these closed backgrounds like .NET are asking for the same type of tool-set. I see it getting there with certain IDEs and things like that, but definitely the low-level tools that you create are useful, yeah.

Jo: Yeah, and I definitely think that the libraries we use for our production apps are clearly important, like Active Record on the back-end side, or Ember on the front-end side, what have you. But production code is not everything, right? A lot of the code that we use is actually not going to run in production, but it’s still very, very important. To give you an example, I recorded myself making a tiny library for NPM a while back, and then I looked at the time breakdown, at what I actually spent my time doing. Reviewing these screen recordings of myself making the library, it turns out I spent about 90 minutes on the whole thing: 15 minutes writing the actual production library code, 20 minutes writing documentation, and another 25 minutes writing tests. I spent 10 minutes packaging up the whole thing and doing some bookkeeping, and another 20 minutes or so on Twitter and IRC, just bouncing things off of people and trying to figure out what the right approach was and how to get some edge cases right. So, less than a fifth of the time was actually spent writing the production code. And I think that illustrates how important it is that we focus on the whole tool chain that we use as developers, on our complete workflows, and not just the parts that are obviously production code.

DeVaris: Yeah, I definitely think that’s a good perspective. It’s interesting that you recorded yourself while you were programming, I mean, for the project. I suspect that if I were to do that, you probably wouldn’t see me on Twitter and IRC. I’d probably be like on Facebook, and then like Hacker News, and all types of other things in the middle.

Jo: It’s funny. When you record yourself you actually become a bit more disciplined because you don’t want to be embarrassed, right?

DeVaris: Yeah, that is true. That is very true. I mean, maybe some employer is going to listen to this and try that, so I think we’ve let the cat out of the bag.

Jo: Yeah, I think I actually spend way too much time on Twitter and Hacker News, and much of it is not productive, yeah.

DeVaris: Yeah, so what are the projects that you’re working on for the future, or that are going to be released soon? Or what are the open-source projects that you see in the community that have a lot of promise?

Jo: I don’t know right now.

DeVaris: Okay.

Jo: So, I don’t have anything in the pipeline coming up. So basically, I took several months off to work on Broccoli where I really just dedicated full-time work to it, and now I have to get back to some other stuff that will result in money.

DeVaris: Oh, okay. All right.

Jo: Right. Because I have to pay the rent. But what I’m doing now is working part-time on Broccoli and pushing the plugin ecosystem forward, and there’s obviously some really exciting stuff coming up in the Ember ecosystem. I haven’t been following very closely, but I’m really excited about HTMLBars, and I think it’s going to really change the way we write applications, and that at some point we’ll hopefully be able to write components, and not just entire apps. So my general plan is to work on my own stuff and figure out what other things bottleneck me, now that I have a build tool. And then a year from now I’ll identify another thing where there is low-hanging fruit, where if we can solve a problem well, we can gain productivity again.

DeVaris: Right. That seems to be a very good approach. And if anybody out there wants to help Jo pay the rent, you can probably contact her, obviously on Twitter, right? She’s also on IRC.

Jo: Yes. Twitter and IRC.

DeVaris: That’s pretty cool. All right, so, can you give us a little bit of history about how Broccoli came to be? I know you said that at the time there were some tools being developed and most of them were Grunt-based, but what was the thought process, and what was your approach behind it? Yeah, so, talk a little bit about that.

Jo: Yeah, so originally I started thinking about this problem space at last year’s Ember conference, and the thing that was bothering me most was the lack of package management, sort of like RubyGems on the back-end side, or NPM for back-end JavaScript. And it turned out pretty quickly, as I started talking to people, that the package manager was not really the missing piece; it was something sitting on top of a package manager that can tie all the files together, that tells you what to do with the files. So I think we have Bower as a pretty good package manager and that’ll be fine, but we need a tool like Broccoli sitting on top of it. At first I just started talking to people about the problem, exploring the problem space and trying to understand better what we needed to solve it. And a lot of people very graciously humored me and talked to me about this and spent time with me, even though I had no code at all to show for it. It was all just in my head, I guess. Then, in the summer of 2013, I went to 5 different conferences, and I kept pitching this idea of a build tool at people and trying to figure out: is my approach the right one? What kind of problems do we need to solve? And in that way I iterated on the solution. This whole process really, really drove home to me how important community is for making open-source software. If you look at git-blame, it’s all me on Broccoli, basically; I mostly wrote the code, but it really synthesizes a lot of community knowledge about building. There were some very crucial insights that I got, that I would never have gotten on my own, that came out of these conversations in 2013. And even in 2014, I was on a conference call with several build-tool maintainers and we chatted about the challenges of designing APIs for build tools and that kind of stuff. Some really important insights came out of that conversation as well, and really shaped what the Broccoli API looks like now. So I guess in a way I had this abstract notion that programming is not just a solitary activity, but is about other people as well. But actually writing Broccoli, actually going through this whole process, really drove home to me how important it is that we hang out together, that we talk a lot. And the time that we spend on Twitter talking about technical problems, sometimes it’s just procrastination, right, but it is also a really important part of the development process.

DeVaris: No, I definitely agree. And I think that the collaboration, having people you can bounce problems off of, intelligent people, and sometimes people who aren’t even aware of the problem space, so that you can get perspective on a direction when you’re not quite sure about it, is extremely helpful.

Jo: Exactly.

DeVaris: Yeah, so, quick question. How did you come up with the name Broccoli?

Jo: It stands for browser compilation library.

DeVaris: Wow.

Jo: I am very proud of that.

DeVaris: Yeah, I would be too. Wow, that is very applicable.

Jo: Right, isn’t it?

DeVaris: Yeah, so how can people get involved? You said you were hoping to build out the plugin ecosystem around Broccoli. How do you suggest people start building plugins for it? And what are the things people have asked to be included in the standard Broccoli package that you see more as plugins?

Jo: So there are several things that I think we need to work on. Obviously we want more plugins. And the biggest problem for writing plugins is always getting the performance right. When you map input files one-to-one into output files, like CoffeeScript, where you have a .coffee file going in and a .js file coming out, getting the performance right is very easy, because you know exactly when you have to rebuild a file. But for more complicated cases like Sass, where one input file can import another input file, and that in turn can import another input file, it’s much trickier to figure out when you need to rebuild a given file. I don’t think we’ve figured that out really well yet, and it’s kind of inherent to this problem of building apps fast. So I think we will need more brainpower on that problem, in particular not just from people who work on build tools, but also from people working on the compilers themselves, like Sass and LESS and those kinds of tools. Another thing that we’re working on is coming up with a kind of default stack of tools. That’s what Ember CLI is trying to do, and I have come to realize that, while I’m pretty confident about Broccoli’s architecture, it’s a little bit too low-level for people to just get started with very easily, right? It’s a bit confusing to use. And I think the fundamental problem is that it provides a very low-level abstraction of what you’re actually trying to do. The Rails asset pipeline kind of falls on the other extreme of the spectrum. It doesn’t give you any way to customize what you’re doing, but you can very easily add plugins to it and new file formats just by putting them in your Gemfile. So I think what we are going to have to do eventually is come up with a sensible abstraction layer on top of Broccoli that gives you a little less flexibility than Broccoli, but makes it easier for people to get started and to plug new plugins into their build pipeline.

DeVaris: Yeah, I think that definitely would be helpful. So if anybody out there is having some pains with Broccoli and the configuration and things like that, you’re more than welcome to submit a pull request. What is the GitHub repo address, Jo?

Jo: So for Broccoli itself it’s joliss/broccoli, but I think a really good place to discuss these things is also the #ember-cli channel on Freenode.

DeVaris: Okay.

Jo: And a lot of people that care about the kind of tooling that we’re building on top of Broccoli hang out in that channel.

DeVaris: Cool. Well, Jo, I won’t keep you, but I really do appreciate you coming on the Hot Seat. It’s been a pleasure. I feel smarter already. Thank you so very much.

Jo: Thanks for having me.

DeVaris: No problem. So where can the people find you on the internets?

Jo: It’s @jo_liss on Twitter, and I guess joliss on IRC.

DeVaris: Okay.

Jo: And joliss42@gmail.com if you want to email me or IM me. I’m very happy for people to add me on IM as well.

DeVaris: Okay. And one more time for the solitaire website. I got to check this out by the way, this is crazy.

Jo: It’s solitr.com and it’s a super-simple business. It’s like less than 1000 lines of code.

DeVaris: Okay.

Jo: So I’m really proud of how unambitious it is.

DeVaris: Well I’m happy that that is working out for you. And again, thank you so very much. It’s been a pleasure and I’m really, really glad that I finally got you on the show.

Jo: Thank you, DeVaris.

DeVaris: Have a wonderful rest of the day.

Jo: Thanks, you too.

DeVaris: All right.

Want more of this? Subscribe to the Ember Hot Seat to receive the latest episodes by email.


What’s Next for Me (As of April 2014)

On a personal note, I just wanted to update everyone on what I’ve been up to:

Open Source

The past four months have been dedicated to working full time on Broccoli (blog post), thanks to some savings from my previous consulting gigs, combined with moderate living expenses.

In retrospect, taking time off to write Broccoli was clearly worth it. I went
one month over my original budget of three months, but people’s enthusiastic
reactions at EmberConf have convinced me that I solved a worthwhile problem,
and that writing Broccoli will pay off in productivity increases across the
community. Personally, having Broccoli will help me with the upcoming work on
my business, and writing it has helped me become a better developer.

Starting a week from now, I will scale my open-source work back to part time,
to about 1–2 hours per day. There is still work to be done on Broccoli as well
as related projects like ember-cli, but it doesn’t require full-time
commitment from me anymore.

Building better community is another thing I’m planning to dedicate some time
to. I will blog about this soon.

My vague expectation is that in a year or so, I will find another worthwhile
problem to solve, and take some time off in a similar fashion. In the
meantime, it is time for me to work on my business.

Business: Solitr

SEO

Solitr started off as a weekend project, but then I noticed it started getting actual traffic. I knew that the highest-ranked site for “solitaire” gets over 100k daily visits, so I figured that with ad-monetization alone, there’s probably a business there.

Since then, traffic to Solitr has risen to 4000 daily visits (mostly through
ranking for niche keywords), even though I haven’t been able to work much on
it so far. Some testing with AdSense indicates that I can start paying the
rent with it. That’s not much, but it feels quite liberating. It also makes me
confident that once the search ranking improves, I can have a viable business.

My plan is to hit 100k daily visitors by the end of 2015. Getting there will
require some SEO work, both on the main keyword (“solitaire”) and long tail
for niche keywords and i18n.

A/B Testing

I also want to learn A/B testing, in parallel to the SEO work. A/B testing
requires many data points to yield statistically significant results. Running
a free-to-play game presents a unique opportunity to get my hands dirty, as I
get a ludicrous number of data points very early in my business – the kind of
data that with a SaaS business you would only get at significant scale.

My role model for this is Patrick McKenzie
– my Solitr is approximately equivalent to his Bingo Card Creator – and his
extensive blogging has been of immeasurable value to me. I hope to pay it
forward by being open about Solitr and blogging about my adventures with A/B
testing in a similar vein.

I view the A/B testing work mostly as paid-for education. Perhaps I can
monetize it a bit by selling boutique consulting like Patrick. But more
importantly, I’ll have a valuable tool for scaling businesses in my
professional skill set. If I start a business in the future (say a B2B SaaS)
and it gets traction, I’m hoping that data-driven tools like A/B testing will
allow me to scale it and realize returns much faster than I’d be able to
otherwise.

That’s all for now. See you all soon, on Twitter and GitHub!


Broccoli: First Beta Release

Broccoli is a new build tool. It’s
comparable to the Rails asset pipeline in scope, though it runs on Node and is
backend-agnostic.

After a long slew of 0.0.x alpha releases, I just pushed out the first beta
version, Broccoli 0.1.0.

Update March 2015: This post is still up-to-date with regard to
architectural considerations, but the syntax used in the examples is
outdated.

Table of Contents:

  1. Quick Example
  2. Motivation / Features
  3. Architecture
  4. Background / Larger Vision
  5. Comparison With Other Build Tools
  6. What’s Next

1. Quick Example

Here is a sample build definition file (Brocfile.js), presented without
commentary just to illustrate the syntax:

Brocfile.js
module.exports = function (broccoli) {
  var filterCoffeeScript = require('broccoli-coffee');
  var compileES6 = require('broccoli-es6-concatenator');

  var sourceTree = broccoli.makeTree('lib');
  sourceTree = filterCoffeeScript(sourceTree);

  var appJs = compileES6(sourceTree, {
    ...
    outputFile: '/assets/app.js'
  });

  var publicFiles = broccoli.makeTree('public');

  return [appJs, publicFiles];
};

Run broccoli serve to watch the source files and continuously serve the
build output on localhost. Broccoli is optimized to make broccoli serve as
fast as possible, so you should never experience rebuild pauses.

Run broccoli build dist to run a one-off build and place the build output in
the dist directory.

For a longer example, see the
broccoli-sample-app.

2. Motivation / Features

2.1. Fast Rebuilds

The most important concern when designing Broccoli was enabling fast
incremental rebuilds. Here’s why:

Let’s say you’re using Grunt to build an application written with
CoffeeScript, Sass, and a few more such compilers. As you develop, you want to
edit files and reload the browser, without having to manually rebuild each
time. So you use grunt watch, to rebuild automatically. But as your
application grows, the build gets slower. Within a few months of development
time, your edit-reload cycle has turned into an edit-wait-10-seconds-reload
cycle.

So to speed up your build, you try rebuilding only the files that have
changed. This is difficult, because sometimes one output file depends on
multiple input files. You manually configure some dependency rules, to rebuild
the right files depending on which files were modified. But Grunt was never
designed to do this well, and your custom rule set won’t reliably rebuild the
right files. Sometimes it rebuilds files when it doesn’t have to (making your
build slow). Worse, sometimes it doesn’t rebuild files when it should (making
your build unreliable).

With Broccoli, once you fire up broccoli serve, it will figure out by itself
which files to watch, and only rebuild those that need rebuilding.

In effect, this means that rebuilds tend to be O(1) constant-time with the
number of files in your application, as you generally only rebuild one file.
I’m aiming for under 200 ms per rebuild with a typical build stack, since that
type of delay feels near-instantaneous to the human brain, though anything up
to half a second is acceptable in my book.

2.2. Chainable Plugins

Another concern was making plugins composable. Let me show you how easy it
is to compile CoffeeScript and then minify the output with Broccoli.

var tree = broccoli.makeTree('lib')
tree = compileCoffeeScript(tree)
tree = uglifyJS(tree)
return tree

With Grunt, we’d have to create a temporary directory to store the
CoffeeScript output, as well as an output directory. As a result of all this
bookkeeping, Gruntfiles tend to grow rather lengthy. With Broccoli, all this
is handled automatically.
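
For comparison, here is roughly what the same two-step pipeline tends to look like in a Gruntfile. This is only a sketch following the usual grunt-contrib-coffee and grunt-contrib-uglify conventions, and the tmp/js intermediate directory is a name I picked purely for illustration:

module.exports = function (grunt) {
  grunt.initConfig({
    coffee: {
      compile: {
        expand: true,
        cwd: 'lib',
        src: ['**/*.coffee'],
        dest: 'tmp/js',   // intermediate build products need an explicit home...
        ext: '.js'
      }
    },
    uglify: {
      dist: {
        // ...and every later step has to know about that directory
        files: { 'dist/app.js': ['tmp/js/**/*.js'] }
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-coffee');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.registerTask('build', ['coffee', 'uglify']);
};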

3. Architecture

For those who are curious, let me tell you about Broccoli’s architecture.

3.1. Trees, Not Files

Broccoli’s unit of abstraction to describe sources and build products is not a
file, but rather a tree – that is, a directory with files and subdirectories.
So it’s not file-goes-in-file-goes-out, it’s tree-goes-in-tree-goes-out.

If we designed Broccoli around individual files, we’d be able to compile
CoffeeScript just fine (as it compiles 1 input file into 1 output file), but
the API would be unnatural for compilers like Sass (which needs to read more
files as it encounters @import statements, and thus compiles n input files
into 1 output file).

On the other hand, with Broccoli’s design around trees, n:1 compilers like
Sass are no problem, while 1:1 compilers like CoffeeScript are an easily
expressible sub-case. In fact, we have a Filter base class for such 1:1
compilers to make them very easy to implement.
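
To make that sub-case concrete, here is a minimal sketch of a 1:1 plugin built on the Filter base class. It assumes the broccoli-filter conventions (a processString hook plus extensions/targetExtension properties), and the upper-casing plugin itself is made up purely for illustration:

var Filter = require('broccoli-filter');

// Made-up 1:1 plugin: upper-cases every .txt file in the input tree.
function UpcaseFilter(inputTree, options) {
  if (!(this instanceof UpcaseFilter)) return new UpcaseFilter(inputTree, options);
  Filter.call(this, inputTree, options);
}

UpcaseFilter.prototype = Object.create(Filter.prototype);
UpcaseFilter.prototype.constructor = UpcaseFilter;

// Only .txt files are rewritten; everything else passes through untouched.
UpcaseFilter.prototype.extensions = ['txt'];
UpcaseFilter.prototype.targetExtension = 'txt';

// Called once per matching file; the base class takes care of caching.
UpcaseFilter.prototype.processString = function (contents, relativePath) {
  return contents.toUpperCase();
};

module.exports = UpcaseFilter;

In a Brocfile you would then use it like any other plugin: tree = UpcaseFilter(tree).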

3.2. Plugins Just Return New Trees

This one is slightly more subtle: At first, I had designed Broccoli with two
primitives: a “tree”, which represents a directory with files, and a chainable
“transform”, which takes an input tree and returns a new compiled tree.

This implies that transforms map trees 1:1. Surprisingly, this is not a good
abstraction for all compilers. For instance, the Sass compiler has a notion of
“load paths” that it searches when it encounters an @import directive.
Similarly, JavaScript concatenators like r.js have a “paths” option to search
for imported modules. These load paths are ideally represented as a set of
“tree” objects.

As you can see, many real-world compilers actually map n trees into 1 tree.
The easiest way to support this is to let plugins deal with their input trees
themselves, thereby allowing them to take 0, 1, or n input trees.

But now that we let plugins handle their input trees, we don’t need to know
about compilers as first-class objects in Broccoli land anymore. Plugins
simply export functions that take zero or more input trees (and perhaps some
options), and return an object representing a new tree. For instance:

broccoli.makeTree('lib') // => a tree
compileCoffeeScript(tree) // => a tree
compileSass(tree, {
  loadPaths: [moreTrees, ...]
}) // => a tree

3.3. The File System Is The API

Remember that because Grunt doesn’t support chaining of plugins, we end up
having to manage temporary directories for intermediate build products in our
Grunt configurations, making them overly verbose and hard to maintain.

To avoid all this, our first intuition might be to abstract the file system
away into an in-memory API, representing trees as collections of streams. Gulp
for instance does this. I tried this in an early version of Broccoli, but it
turns out to make the code quite complicated: With streams, plugins now have
to worry about race conditions and deadlocks. Also, in addition to having a
notion of streams and paths, we need file attributes like last-modified time
and size in our API. And if we ever need the ability to re-read a file, or
seek, or memory-map, or if we need to pass an input tree to another process
we’re shelling out to, the stream API fails us and we have to write out the
entire tree to the file system first. So much complexity!

But wait. If we’re going to replicate just about every feature of the file
system, and in some cases we have to fall back to turning our in-memory
representation into an actual tree on the file system and back again, then …
why don’t we use the actual file system instead?

Node’s fs module already provides as compact an API to the file system as we
could wish for.

The only disadvantage is that we have to manage temporary directories behind
the scenes, and clean them up. But that’s easy to do in practice.

People sometimes worry that writing to disk is slower. But even if you hit the
actual disk drive (which thanks to paging is rare), the bandwidth of modern
SSDs has become so high compared to CPU speed that the overhead tends to be
negligible.
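
To tie this together with the previous section, here is a rough sketch of what a hand-rolled plugin might look like when the file system is the API. The read/cleanup pair is my shorthand for the current 0.x tree protocol, so treat the method names as illustrative rather than definitive, and addBanner is a made-up plugin (handling top-level files only, to keep the sketch short), not a published package:

var fs = require('fs');
var os = require('os');
var path = require('path');

// Made-up plugin: prepends a banner comment to every file in the input tree.
function addBanner(inputTree, banner) {
  var outputDir = null;
  return {
    read: function (readTree) {
      // readTree resolves the input tree to a directory path (as a promise).
      return readTree(inputTree).then(function (inputDir) {
        if (!outputDir) {
          outputDir = path.join(os.tmpdir(), 'broccoli-banner-' + process.pid);
          fs.mkdirSync(outputDir);
        }
        fs.readdirSync(inputDir).forEach(function (fileName) {
          var contents = fs.readFileSync(path.join(inputDir, fileName), 'utf8');
          fs.writeFileSync(path.join(outputDir, fileName), banner + '\n' + contents);
        });
        return outputDir; // the "tree" we hand back is just a directory on disk
      });
    },
    cleanup: function () {
      // Delete the temporary output directory here (e.g. with the rimraf package).
    }
  };
}

module.exports = addBanner;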

3.4. Caching, Not Partial Rebuilding

When I originally tried to solve the problem of incremental rebuilds, I tried
to devise a way to check whether each existing output file is stale, so that
Broccoli could trigger the rebuild for a subset of its input files. But this
“partial rebuild” approach requires that we are able to trace which files an
output file depends on, all the way back to the source files, and it also
makes file deletion tricky. “Partial rebuilds” is the classical approach of
Make, as well as the Rails asset pipeline, Rake::Pipeline, and Brunch, but
I’ve come to believe that it’s unnecessarily complicated.

Broccoli’s approach is much simpler: Ask each plugin to cache its build output
as appropriate. When we rebuild, start with a blank slate, and re-run the
entire build process. Plugins will be able to provide most of their output
from their caches, which takes near-zero time.

Broccoli started off providing some caching primitives, but it turned out
unnecessary to have this in the core API. Now we just make sure that the
general architecture doesn’t stand in the way of caching.

For plugins that map files 1:1, like the CoffeeScript compiler, we can
use common caching code (provided by the
broccoli-filter package), leaving
the plugin code looking
very simple.
Plugins that map files n:1, like Sass, need to be more careful about
invalidating their caches, so they need to provide custom caching logic. I
assume that we’ll still be able to extract some common caching logic in the
future.
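
As a rough illustration of the 1:1 case, the caching logic boils down to something like the following. This is the general idea only, not broccoli-filter's actual implementation, which also tracks output paths and handles file deletions:

var crypto = require('crypto');
var fs = require('fs');

// Cache keyed by input path plus a hash of the file's contents.
var cache = Object.create(null);

function compileWithCache(inputPath, compile) {
  var contents = fs.readFileSync(inputPath, 'utf8');
  var key = inputPath + ':' +
    crypto.createHash('md5').update(contents).digest('hex');
  if (cache[key] === undefined) {
    // Only pay the compile cost when this file's contents have changed.
    cache[key] = compile(contents);
  }
  return cache[key];
}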

3.5. No Parallelism

If we all suffer from slow builds, should we try to parallelize builds,
compiling multiple files in parallel?

My answer is no: The reason is that parallelism makes it possible to have
race conditions in plugins, which you might not notice until deploy time.
These are the worst kinds of bugs, and avoiding parallel execution eliminates
this entire class of bugs.

On the other hand, Amdahl’s law
stops us from gaining much performance through parallelizing. For a simplified
example, say our build process takes 16 seconds in total. Let’s say 50% of it
can be parallelized, and the rest needs to run in sequence (e.g.
CoffeeScript-then-concatenate-then-UglifyJS). If we run this on a 4-core
machine, the build would take 8 seconds for the sequential part plus 8 / 4 = 2
seconds for the parallel part, still totaling 10 seconds, less than a 40%
performance gain.
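
The same arithmetic as a tiny helper, in case you want to plug in your own numbers:

// Amdahl's law: total build time on n cores, given the parallelizable fraction p.
function amdahlTime(totalSeconds, p, cores) {
  return totalSeconds * ((1 - p) + p / cores);
}

amdahlTime(16, 0.5, 4); // => 10 seconds, i.e. only a 37.5% reduction from 16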

For incremental rebuilds, which constitute the hot path that we really care
about, caching tends to eliminate most of the parallelizable parts of the
build process anyway, so we are left with little to no performance gain.

Because of that, in general I believe that parallelizing the build process is
not a good trade. In principle you could write a Broccoli plugin that performs
some work in a parallel fashion. However, Broccoli’s primitives, as well as
the helper code that I’ve published on GitHub, actively encourage
deterministic sequential code patterns.

4. Background / Larger Vision

There are two main motivators that made me tackle writing a good build tool.

The first motivator is better productivity, through fast incremental rebuilds.

I generally believe that developer productivity is largely determined by the
quality of the libraries and tools we use. The “edit file, reload browser”
cycle that we perform hundreds of times a day is probably the core feedback
loop when we program. A great way to improve our tooling is getting this
edit-reload feedback loop to be as fast as humanly possible.

The second motivator is encouraging an ecosystem of front-end packages.

I believe that Bower and the ES6 module system will help us build a great
ecosystem, but Bower by itself is useless unless you have a build tool running
on top. This is because Bower is a content-agnostic transport tool that only
dumps all your dependencies (and their dependencies, recursively) into the
file system—it’s up to you what to do with them. Broccoli aims to become the
missing build tool sitting on top.

Note that Broccoli itself is agnostic about Bower or ES6 modules—you can use
it for whatever you like. (I am aware there are other stacks, like npm +
browserify, or npm + r.js.) I will discuss all of this in more detail in a
future blog post.

5. Comparison With Other Build Tools

If you are almost convinced but also wondering how other build tools stack up
against Broccoli, let me tell you why I wrote Broccoli instead of using any of
the following:

Grunt is a task runner, and it never set out to be a build tool. If you
try to (ab)use it as a build tool, you quickly find that because it doesn’t
attempt to handle chaining (composition), you end up having to manage
temporary directories for intermediate build products yourself, adding a lot
of complexity to your Grunt configuration. It also does not support reliable
incremental rebuilds, so your rebuilds will tend to be slow and/or unreliable;
see section “Fast Rebuilds” above.

That said, Grunt’s utility as a task runner is in providing a cross-platform
way to run shell-script type functionality, such as deploying your app or
generating scaffolding. Broccoli will be able to act as a Grunt plugin in the
future, so that you can call it from your Gruntfile.

Gulp tries to solve the problem of chaining plugins,
but in my view it gets the architecture wrong: Rather than passing around
trees, it passes around sequences (= event streams) of files (= streams or
buffers).
This works fine for cases where one input file maps into one output
file. But when a plugin needs to follow import statements, and thus needs to
access input files out of order, things get complicated.
For now, plugins that follow import statements tend to just bypass the build
tool and read directly from the file system.
In the future, I hear that there will be helper libraries to turn all the
streams into a (virtual) file system and pass that to the compiler. I would
claim though that all this complexity is a symptom of an impedance mismatch
between the build tool and the compiler. See “Trees, Not Files” above for more
on this. I’m also not convinced that abstracting away files behind a stream or
buffer API is helpful at all; see “The File System Is The API” above.

Brunch, like Gulp, uses a file-based (not tree-based) in-memory API (see
this method signature).
Like with Gulp, plugins end up falling back to bypassing the build tool
when they need to read more than one file.
Brunch also tries to do partial rebuilding rather than caching; see section
“Caching, Not Partial Rebuilding” above.

Rake::Pipeline is written in Ruby, which is less ubiquitous than Node in
front-end land. It tries to do partial rebuilds as well. Yehuda says it’s not
heavily maintained anymore, and that he’s betting on Broccoli.

The Rails asset pipeline uses partial rebuilds as well, and uses very
different code paths for development mode and production (precompilation)
mode, causing people to have unexpected issues when they deploy. More
importantly it’s tied to Rails as a backend.

6. What’s Next

The list of plugins is still
small. If they are enough for you, I cautiously recommend giving Broccoli a
try right now: https://github.com/joliss/broccoli#installation

I would like to see other people get involved in writing plugins. Wrapping
compilers is easy, but the hard and important part is getting caching and
performance right. We’ll also want to work on generalizing more caching
patterns in addition to
broccoli-filter, so that plugins
don’t suffer from excessive boilerplate.

Over the next week or two, my plan is to improve the documentation and clean
up the code base of Broccoli core and the plugins. We will also have to add a
test suite to Broccoli core, and figure out an elegant way to integration-test
Broccoli plugins against Broccoli core.
Another thing that’s missing with the existing plugins is source map support.
This is slightly complicated by performance considerations, as well as the
fact that chained plugins need to consume other plugins’ source maps and
interoperate properly, so I haven’t found the time to tackle this yet.

Broccoli will see active use in the Ember ecosystem, powering the default
stack emitted by ember-cli (an
upcoming tool similar in functionality to the rails command). We are also
hoping to move the build process used for generating the Ember core and
ember-data distributions from Rake::Pipeline and Grunt to Broccoli.

That said, I would love to see Broccoli adopted outside the Ember community as
well. JS MVC applications written with frameworks like Angular or Backbone, as
well as JavaScript and CSS libraries that require build steps, are all prime
candidates for being built by Broccoli.

I don’t currently see any major roadblocks on the path to Broccoli becoming
stable. By using it for real-world build scenarios, we should gain confidence
in its API, and I’m hoping that we can bump the version to 1.0.0 within a few
months’ time.

This blog post is the first comprehensive explanation of Broccoli’s
architecture, and the documentation is still somewhat sparse. I’m happy to
help you get started, and fix any bugs you encounter. Come find me on
#broccolijs on Freenode, or at
joliss42@gmail.com on Google Talk. I’ll also
respond to any issues you post on GitHub.

Thanks to Jonas Nicklas, Josef Brandl, Paul Miller, Erik Bryn, Yehuda Katz,
Jeff Felchner, Chris Willard, Joe Fiorini, Luke Melia, Andrew Davey, and Alex
Matchneer for reading and critiquing drafts of this post.

Discuss on Twitter

Read more at the source

Broccoli: First Beta Release

Broccoli is a new build tool. It’s
comparable to the Rails asset pipeline in scope, though it runs on Node and is
backend-agnostic.

After a long slew of 0.0.x alpha releases, I just pushed out the first beta
version, Broccoli 0.1.0.

Update March 2015: This post is still up-to-date with regard to
architectural considerations, but the syntax used in the examples is
outdated.

Table of Contents:

  1. Quick Example
  2. Motivation / Features
  3. Architecture
  4. Background / Larger Vision
  5. Comparison With Other Build Tools
  6. What’s Next

1. Quick Example

Here is a sample build definition file (Brocfile.js), presented without
commentary just to illustrate the syntax:

Brocfile.js
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
module.exports = function (broccoli) {
  var filterCoffeeScript = require('broccoli-coffee');
  var compileES6 = require('broccoli-es6-concatenator');

  var sourceTree = broccoli.makeTree('lib');
  sourceTree = filterCoffeeScript(sourceTree);

  var appJs = compileES6(sourceTree, {
    ...
    outputFile: '/assets/app.js'
  });

  var publicFiles = broccoli.makeTree('public');

  return [appJs, publicFiles];
};

Run broccoli serve to watch the source files and continuously serve the
build output on localhost. Broccoli is optimized to make broccoli serve as
fast as possible, so you should never experience rebuild pauses.

Run broccoli build dist to run a one-off build and place the build output in
the dist directory.

For a longer example, see the
broccoli-sample-app.

2. Motivation / Features

2.1. Fast Rebuilds

The most important concern when designing Broccoli was enabling fast
incremental rebuilds. Here’s why:

Let’s say you’re using Grunt to build an application written with
CoffeeScript, Sass, and a few more such compilers. As you develop, you want to
edit files and reload the browser, without having to manually rebuild each
time. So you use grunt watch, to rebuild automatically. But as your
application grows, the build gets slower. Within a few months of development
time, your edit-reload cycle has turned into an edit-wait-10-seconds-reload
cycle.

So to speed up your build, you try rebuilding only the files that have
changed. This is difficult, because sometimes one output file depends on
multiple input files. You manually configure some dependency rules, to rebuild
the right files depending on which files were modified. But Grunt was never
designed to do this well, and your custom rule set won’t reliably rebuild the
right files. Sometimes it rebuilds files when it doesn’t have to (making your
build slow). Worse, sometimes it doesn’t rebuild files when it should (making
your build unreliable).

With Broccoli, once you fire up broccoli serve, it will figure out by itself
which files to watch, and only rebuild those that need rebuilding.

In effect, this means that rebuilds tend to be O(1) constant-time with the
number of files in your application, as you generally only rebuild one file.
I’m aiming for under 200 ms per rebuild with a typical build stack, since that
type of delay feels near-instantaneous to the human brain, though anything up
to half a second is acceptable in my book.

2.2. Chainable Plugins

Another concern was making plugins composable. Let me show you how easy it
is to compile CoffeeScript and then minify the output with Broccoli.

1
2
3
4
var tree = broccoli.makeTree('lib')
tree = compileCoffeeScript(tree)
tree = uglifyJS(tree)
return tree

With Grunt, we’d have to create a temporary directory to store the
CoffeeScript output, as well as an output directory. As a result of all this
bookkeeping, Gruntfiles tend to grow rather lengthy. With Broccoli, all this
is handled automatically.

3. Architecture

For those who are curious, let me tell you about Broccoli’s architecture.

3.1. Trees, Not Files

Broccoli’s unit of abstraction to describe sources and build products is not a
file, but rather a tree – that is, a directory with files and subdirectories.
So it’s not file-goes-in-file-goes-out, it’s tree-goes-in-tree-goes-out.

If we designed Broccoli around individual files, we’d be able to compile
CoffeeScript just fine (as it compiles 1 input file into 1 output file), but
the API would be unnatural for compilers like Sass (which needs to read more
files as it encounters @import statements, and thus compiles n input files
into 1 output file).

On the other hand, with Broccoli’s design around trees, n:1 compilers like
Sass are no problem, while 1:1 compilers like CoffeeScript are an easily
expressible sub-case. In fact, we have a Filter base class for such 1:1
compilers to make them very easy to implement.

3.2. Plugins Just Return New Trees

This one is slightly more subtle: At first, I had designed Broccoli with two
primitives: a “tree”, which represents a directory with files, and a chainable
“transform”, which takes an input tree and returns a new compiled tree.

This implies that transforms map trees 1:1. Surprisingly, this is not a good
abstraction for all compilers. For instance, the Sass compiler has a notion of
“load paths” that it searches when it encounters an @import directive.
Similarly, JavaScript concatenators like r.js have a “paths” option to search
for imported modules. These load paths are ideally represented as a set of
“tree” objects.

As you can see, many real-world compilers actually map n trees into 1 tree.
The easiest way to support this is to let plugins deal with their input trees
themselves, thereby allowing them to take 0, 1, or n input trees.

But now that we let plugins handle their input trees, we don’t need to know
about compilers as first-class objects in Broccoli land anymore. Plugins
simply export functions that take zero or more input trees (and perhaps some
options), and return an object representing a new tree. For instance:

1
2
3
4
5
broccoli.makeTree('lib') // => a tree
compileCoffeeScript(tree) // => a tree
compileSass(tree, {
  loadPaths: [moreTrees, ...]
}) // => a tree

3.3. The File System Is The API

Remember that because Grunt doesn’t support chaining of plugins, we end up
having to manage temporary directories for intermediate build products in our
Grunt configurations, making them overly verbose and hard to maintain.

To avoid all this, our first intuition might be to abstract the file system
away into an in-memory API, representing trees as collections of streams. Gulp
for instance does this. I tried this in an early version of Broccoli, but it
turns out to make the code quite complicated: With streams, plugins now have
to worry about race conditions and deadlocks. Also, in addition to having a
notion of streams and paths, we need file attributes like last-modified time
and size in our API. And if we ever need the ability to re-read a file, or
seek, or memory-map, or if we need to pass an input tree to another process
we’re shelling out to, the stream API fails us and we have to write out the
entire tree to the file system first. So much complexity!

But wait. If we’re going to replicate just about every feature of the file
system, and in some cases we have to fall back to turning our in-memory
representation into an actual tree on the file system and back again, then …
why don’t we use the actual file system instead?

Node’s fs module already provides as compact an API to the file system as we
could wish for.

The only disadvantage is that we have to manage temporary directories behind
the scenes, and clean them up. But that’s easy to do in practice.

People sometimes worry that writing to disk is slower. But even if you hit the
actual disk drive (which thanks to paging is rare), the bandwidth of modern
SSDs has become so high compared to CPU speed that the overhead tends to be
negligible.

3.4. Caching, Not Partial Rebuilding

When I originally tried to solve the problem of incremental rebuilds, I tried
to devise a way to check whether each existing output file is stale, so that
Broccoli could trigger the rebuild for a subset of its input files. But this
“partial rebuild” approach requires that we are able to trace which files an
output file depends on, all the way back to the source files, and it also
makes file deletion tricky. “Partial rebuilds” is the classical approach of
Make, as well as the Rails asset pipeline, Rake::Pipeline, and Brunch, but
I’ve come to believe that it’s unnecessarily complicated.

Broccoli’s approach is much simpler: Ask each plugin to cache its build output
as appropriate. When we rebuild, start with a blank slate, and re-run the
entire build process. Plugins will be able to provide most of their output
from their caches, which takes near-zero time.

Broccoli started off providing some caching primitives, but it turned out
unnecessary to have this in the core API. Now we just make sure that the
general architecture doesn’t stand in the way of caching.

For plugins that map files 1:1, like the CoffeeScript compiler, we can
use common caching code (provided by the
broccoli-filter package), leaving
the plugin code looking
very simple.
Plugins that map files n:1, like Sass, need to be more careful about
invalidating their caches, so they need to provide custom caching logic. I
assume that we’ll still be able to extract some common caching logic in the
future.

3.5. No Parallelism

If we all suffer from slow builds, should we try to parallelize builds,
compiling multiple files in parallel?

My answer is no: The reason is that parallelism makes it possible to have
race conditions in plugins, which you might not notice until deploy time.
These are the worst kinds of bugs, and avoiding parallel execution eliminates
this entire class of bugs.

On the other hand, Amdahl’s law
stops us from gaining much performance through parallelizing. For a simplified
example, say our build process takes 16 seconds in total. Let’s say 50% of it
can be parallelized, and the rest needs to run in sequence (e.g.
CoffeeScript-then-concatenate-then-UglifyJS). If we run this on a 4-core
machine, the build would take 8 seconds for the sequential part plus 8 / 4 = 2
seconds for the parallel part, still totaling 10 seconds, less than a 40%
performance gain.

For incremental rebuilds, which constitute the hot path that we really care
about, caching tends to eliminate most of the parallelizable parts of the
build process anyway, so we are left with little to no performance gain.

Because of that, in general I believe that parallelizing the build process is
not a good trade. In principle you could write a Broccoli plugin that performs
some work in a parallel fashion. However, Broccoli’s primitives, as well as
the helper code that I’ve published on GitHub, actively encourage
deterministic sequential code patterns.

4. Background / Larger Vision

There are two main motivators that made me tackle writing a good build tool.

The first motivator is better productivity, through fast incremental rebuilds.

I generally believe that developer productivity is largely determined by the
quality of the libraries and tools we use. The “edit file, reload browser”
cycle that we perform hundreds of times a day is probably the core feedback
loop when we program. A great way to improve our tooling is getting this
edit-reload feedback loop to be as fast as humanly possible.

The second motivator is encouraging an ecosystem of front-end packages.

I believe that Bower and the ES6 module system will help us build a great
ecosystem, but Bower by itself is useless unless you have a build tool running
on top. This is because Bower is a content-agnostic transport tool that only
dumps all your dependencies (and their dependencies, recursively) into the
file system—it’s up to you what to do with them. Broccoli aims to become the
missing build tool sitting on top.

Note that Broccoli itself is angnostic about Bower or ES6 modules—you can use
it for whatever you like. (I am aware there are other stacks, like npm +
browserify, or npm + r.js.) I will discuss all of this in more detail in a
future blog post.

5. Comparison With Other Build Tools

If you are almost convinced but also wondering how other build tools stack up
against Broccoli, let me tell you why I wrote Broccoli instead of using any of
the following:

Grunt is a task runner, and it never set out to be a build tool. If you
try to (ab)use it as a build tool, you quickly find that because it doesn’t
attempt to handle chaining (composition), you end up having to manage
temporary directories for intermediate build products yourself, adding a lot
of complexity to your Grunt configuration. It also does not support reliable
incremental rebuilds, so your rebuilds will tend to be slow and/or unreliable;
see section “Fast Rebuilds” above.

That said, Grunt’s utility as a task runner is in providing a cross-platform
way to run shell-script type functionality, such as deploying your app or
generating scaffolding. Broccoli will be able to act as a Grunt plugin in the
future, so that you can call it from your Gruntfile.

Gulp tries to solve the problem of chaining plugins,
but in my view it gets the architecture wrong: Rather than passing around
trees, it passes around sequences (= event streams) of files (= streams or
buffers).
This works fine for cases where one input file maps into one output
file. But when a plugin needs to follow import statements, and thus needs to
access input files out of order, things get complicated.
For now, plugins that follow import statements tend to simply bypass the
build tool and read directly from the file system.
In the future, I hear that there will be helper libraries to turn all the
streams into a (virtual) file system and pass that to the compiler. I would
claim though that all this complexity is a symptom of an impedance mismatch
between the build tool and the compiler. See “Trees, Not Files” above for more
on this. I’m also not convinced that abstracting away files behind a stream or
buffer API is helpful at all; see “The File System Is The API” above.
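
For contrast, here is roughly what the stream-based chaining looks like in a gulpfile; the plugins are the usual gulp wrappers, and the example is only meant to illustrate the per-file flow described above:

// gulpfile.js -- files flow through the pipeline one at a time. A plugin
// that needs to follow import statements across files cannot see the whole
// input tree here, which is the impedance mismatch discussed above.
var gulp = require('gulp');
var coffee = require('gulp-coffee');
var concat = require('gulp-concat');

gulp.task('build', function () {
  return gulp.src('app/**/*.coffee')
    .pipe(coffee())
    .pipe(concat('app.js'))
    .pipe(gulp.dest('dist'));
});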

Brunch, like Gulp, uses a file-based (not tree-based) in-memory API (see this
method signature). Like with Gulp, plugins end up falling back to bypassing
the build tool when they need to read more than one file.
Brunch also tries to do partial rebuilding rather than caching; see section
“Caching, Not Partial Rebuilding” above.

Rake::Pipeline is written in Ruby, which is less ubiquitous than Node in
front-end land. It tries to do partial rebuilds as well. Yehuda says it’s not
heavily maintained anymore, and that he’s betting on Broccoli.

The Rails asset pipeline uses partial rebuilds as well, and uses very
different code paths for development mode and production (precompilation)
mode, causing people to have unexpected issues when they deploy. More
importantly it’s tied to Rails as a backend.

6. What’s Next

The list of plugins is still
small. If they are enough for you, I cautiously recommend giving Broccoli a
try right now: https://github.com/joliss/broccoli#installation

I would like to see other people get involved in writing plugins. Wrapping
compilers is easy, but the hard and important part is getting caching and
performance right. We’ll also want to work on generalizing more caching
patterns in addition to
broccoli-filter, so that plugins
don’t suffer from excessive boilerplate.
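
As an illustration of the caching pattern that broccoli-filter provides, a one-file-in, one-file-out plugin can be written roughly like this (UpperCaseFilter is made up, and the exact broccoli-filter API should be treated as an assumption):

// upper-case-filter.js -- a hypothetical plugin built on broccoli-filter.
var Filter = require('broccoli-filter');

module.exports = UpperCaseFilter;
UpperCaseFilter.prototype = Object.create(Filter.prototype);
UpperCaseFilter.prototype.constructor = UpperCaseFilter;
function UpperCaseFilter (inputTree, options) {
  if (!(this instanceof UpperCaseFilter)) return new UpperCaseFilter(inputTree, options);
  this.inputTree = inputTree;
  this.options = options || {};
}

// Only .txt files are processed; everything else is passed through.
UpperCaseFilter.prototype.extensions = ['txt'];
UpperCaseFilter.prototype.targetExtension = 'txt';

// broccoli-filter calls processString once per matching file and caches the
// result, so unchanged files are not re-processed on rebuild.
UpperCaseFilter.prototype.processString = function (string, relativePath) {
  return string.toUpperCase();
};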

Over the next week or two, my plan is to improve the documentation and clean
up the code base of Broccoli core and the plugins. We will also have to add a
test suite to Broccoli core, and figure out an elegant way to integration-test
Broccoli plugins against Broccoli core.
Another thing that's still missing from the existing plugins is source map support.
This is slightly complicated by performance considerations, as well as the
fact that chained plugins need to consume other plugins’ source maps and
interoperate properly, so I haven’t found the time to tackle this yet.

Broccoli will see active use in the Ember ecosystem, powering the default
stack emitted by ember-cli (an
upcoming tool similar in functionality to the rails command). We are also
hoping to move the build process used for generating the Ember core and
ember-data distributions from Rake::Pipeline and Grunt to Broccoli.

That said, I would love to see Broccoli adopted outside the Ember community as
well. JS MVC applications written with frameworks like Angular or Backbone, as
well as JavaScript and CSS libraries that require build steps, are all prime
candidates for being built by Broccoli.

I don’t currently see any major roadblocks on the path to Broccoli becoming
stable. By using it for real-world build scenarios, we should gain confidence
in its API, and I’m hoping that we can bump the version to 1.0.0 within a few
months’ time.

This blog post is the first comprehensive explanation of Broccoli’s
architecture, and the documentation is still somewhat sparse. I’m happy to
help you get started, and fix any bugs you encounter. Come find me on
#broccolijs on Freenode, or at
joliss42@gmail.com on Google Talk. I’ll also
respond to any issues you post on GitHub.

Thanks to Jonas Nicklas, Josef Brandl, Paul Miller, Erik Bryn, Yehuda Katz,
Jeff Felchner, Chris Willard, Joe Fiorini, Luke Melia, Andrew Davey, and Alex
Matchneer for reading and critiquing drafts of this post.


The State of Libsass (versus Ruby Sass)

Libsass is a C++ re-implementation of
the Ruby-based Sass compiler. It's an order of magnitude faster than Ruby
Sass, but hasn't seen as much adoption yet.

I recently asked Aaron Leung, the current
maintainer, about the state of libsass. He kindly agreed to let me publish his
response, rendered below. Shout-out to his employer
Moovweb, who, Aaron lets me know,
have been paying him to work on libsass.

Hi Jo,

I appreciate your interest in LibSass! It’s been very stable – the company I work for uses it internally in our SDK and build server, and it’s been running fine. Unfortunately, in terms of features, it hasn’t fully caught up with the Ruby implementation yet. The biggest deficiency right now is that @extend isn’t fully functional – LibSass works correctly for more basic uses of @extend, but a full implementation is still 2-3 months out (it’s a very complex feature).

Off the top of my head, other incompatibilities include:
* no 3.3 features yet
* full UTF-8 support is in the works (hopefully will be done in a week or two)
* @media blocks are following the scoping rules of pre-3.2 Sass
* limited support for the new CSS filter functions
* not properly handling namespaced selectors

In addition to those, there are miscellaneous little bugs that cause LibSass to generate incorrect output, mostly in obscure edge-casey stuff. However, thanks to a recent spike in community interest, I’ve been working harder to get all these things sorted out, and my employer is also interested in seeing LibSass become every bit as good as the Ruby version. Thanks to all that, as well as taking a closer look at the Ruby code, I’m hoping to get completely caught up with the Ruby version by mid-summer.

Hope this helps, and let me know if you have any more questions!

Aaron

He adds:

I should also let you know that we’re working on a more detailed feature/compatibility grid. More generally, we’re trying to be better about staying in touch with the community this year (via blog posts, talks at conferences & meetups, etc), so I’ll try to keep you apprised of any LibSass progress.

P.S. Keep in mind that the “mid-summer 2014” estimate above was given in
private email to me, and is not an official commitment on anyone’s part. ;-)


CSS Preprocessor Benchmark

Results

The timings for processing a 200 KB file of plain CSS (lower is better):
Libsass (a C++ implementation of Sass) and Rework turn out to be extremely
fast at roughly 0.2 seconds each, with r.js faster still; LESS.js comes in
around 0.5 seconds, Stylus at 1.7 seconds, and Ruby Sass at 2.5 seconds.

Background

The speed of your CSS preprocessor is important for developer/designer
ergonomics. The preprocessing time measured by this benchmark will typically
incur as a delay every time you edit the stylesheet sources and hit reload
in the browser. Delays below 0.2 to 0.5 seconds tend to be perceived by
the human brain as near-instantaneous. The higher the delay, the higher the
mental overhead.

To see how fast preprocessors are, I recently ran an informal benchmark of CSS
preprocessors. The benchmark garnered a lot of
interest, so I’m
posting this write-up in blog form.

Methods

I chose 200 KB of minified ungzipped CSS to simulate a large-ish web
application, because that’s approximately the amount that the Yahoo homepage
uses.

The file contents were artificially simple:

.x0{color:green;}
.x1{color:green;}
...
.x9999{color:green;}

Benchmark script: css-preprocessor-benchmark.sh
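
For illustration, a file of this shape can be generated with a few lines of Node; this is just a sketch, not the actual benchmark script linked above:

// generate-css.js -- produces roughly 200 KB of trivial CSS (10,000 rules).
var fs = require('fs');

var rules = [];
for (var i = 0; i < 10000; i++) {
  rules.push('.x' + i + '{color:green;}');
}

fs.writeFileSync('bench.css', rules.join('\n') + '\n');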

I used Ruby 2.1.0p0, Node 0.10.24, and the master branches of each library
(Sass 767c53f, libsass 991bc26, Stylus f539ef0, Rework c60edd3, less.js
281a895, r.js 84f325c).

Discussion

Libsass, Rework, LESS, and r.js all came in at or below the 0.5-second threshold.

Ruby Sass and libsass

Libsass, a re-implementation of Sass in C++, provides a massive (>10x) speed
improvement over the original Sass written in Ruby, dropping processing time
from 2.5 seconds to 0.2 seconds. It is not quite a drop-in replacement for
Sass yet, but it's getting a lot of development love from Aaron Leung. It
might be worth considering for new applications.

People’s biggest worry about libsass seems to be losing the Ruby-based
Compass. Perhaps there will be an
alternative/re-implementation of Compass that is not Ruby-based. But even
without that, it is worth asking whether the massive speed increase of
libsass justifies going without Compass completely.

Given the speed difference, I’d like to suggest that the move from Ruby Sass
to libsass is inevitable, and I hope that it will see copious contributions
from the community to bring it up to par.

Update: Read Aaron’s notes on the state of libsass.

Others

A surprising contender is the newer Rework, which comes in at only 0.2
seconds: written in JavaScript, yet nearly as fast as libsass. LESS.js also
did fairly well at 0.5 seconds, while the third JavaScript-based preprocessor,
Stylus, fell on the slow side at 1.7 seconds. Perhaps it can be improved,
though.

I also included r.js, which was the fastest of all. While it’s not strictly a
preprocessor, it can inline @import statements, and so it can be used to
concatenate your project’s plain CSS files (if you are the minimalist type).
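
For example, the RequireJS optimizer can be driven from Node roughly like this; the file names are made up and the options shown are the commonly documented ones, so treat the details as an assumption rather than a recipe:

// inline-css.js -- use r.js to inline @import statements into one file.
var requirejs = require('requirejs');

requirejs.optimize({
  cssIn: 'styles/main.css',  // entry stylesheet whose @imports get inlined
  out: 'dist/main.css',      // concatenated output file
  optimizeCss: 'standard'    // inline @import statements and strip whitespace
}, function (buildResponse) {
  console.log(buildResponse);
});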
