How to globally change the font size in IntelliJ IDEA

Today I learned… that you can change the font size in IntelliJ IDEA consistently across all editor tabs! It’s not a missing feature, and you don’t have to rely on the awkward trackpad zoom feature.

Here’s how you do it (on a Mac, anyway).

Go to IntelliJ IDEA —> Preferences —> Editor —> Colors & Fonts —> Font


On the right, look for Scheme name.


Click the Save As… button and give it a unique name.


This step is essential! If you skip it, changes you make to Size further down won’t stick, and IntelliJ won’t tell you why. (The dialog is titled “Save Color Scheme,” so it’s easy to overlook that it also controls whether your font size changes are saved.)

Now alter the Size setting and you should see the typeface preview change to reflect your input. Press OK and you’re done!


Googling this problem, I found some conflicting answers about how to increase IntelliJ’s font size.

Many IntelliJ users recommend the “trackpad zoom” solution (which you have to enable under Preferences —> General —> Change font size (Zoom) with Command + Mouse Wheel), but relying on zoom leaves you with a patchwork of mismatched zoom levels, and you have to re-zoom each tab by hand. I wanted a consistently larger font size across all tabs, with no manual zoom step, and this did the trick!

AngularJS: Chaining multiple functions in one ng-click

Today I learned… a little trick for performing multiple functions in a single ng-click. Just separate them with a semicolon (;) like so:

<button ng-click="selectTab(); $parent.someVar = true">Button Text</button>

This comes with a noticeable caveat: it complicates your template code. It’s generally considered good practice to minimize the amount of logic in an HTML template. If you need to do several things on a single ng-click, consider writing (or refactoring) a method in your controller so the template makes just one call.

Nonetheless, this odd bit of Angular syntax can be useful, even if it never makes it to production. In my case, I needed to modify $parent.someVar on click, which was (at the time) outside of the button’s controller. Ultimately, this code was refactored so that someVar could be modified from within selectTab(), but when I needed a quick and dirty implementation to demo something, chaining functions on a single ng-click got the job done.
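For illustration, here’s a framework-free sketch of that kind of refactor. Only the selectTab and someVar names come from the example above; everything else (the makeSectionCtrl helper, the tabSelected flag) is invented, and in the real app this logic would live in an Angular controller:

```javascript
// Hypothetical sketch: fold the template's two ng-click statements into one method,
// so the template only needs ng-click="selectTab()".
function makeSectionCtrl(parentScope) {
  var scope = { tabSelected: false };

  scope.selectTab = function () {
    scope.tabSelected = true;   // selectTab's original job (assumed for this sketch)
    parentScope.someVar = true; // folded in from the template's second statement
  };

  return scope;
}

var parent = { someVar: false };
var section = makeSectionCtrl(parent);
section.selectTab();
console.log(section.tabSelected, parent.someVar); // true true
```

The point is just that the template goes back to a single call, and the cross-scope write is hidden inside the controller where it can be tested.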

Wait, what does $parent.someVar do? What is $parent?

$parent allows code within a child controller to access something contained within the parent scope.

In my project I had multiple scopes.

<div ng-controller="PageCtrl">
   <div ng-controller="SectionCtrl">
      <button ng-click="selectTab()">Button Text</button>
   </div>
</div>

someVar was contained within PageCtrl (the parent scope), but I needed to manipulate it from a button inside SectionCtrl (the child scope). Using $parent, the SectionCtrl code could “look up” into the parent and find someVar. This Stack Overflow Q&A explains $parent with more examples.

Git: How to automatically add the branch name to the end of every commit message

Today I learned… that git can be customized in a number of ways by using its hooks!

On my engineering team, it’s our convention to name feature branches after their corresponding JIRA issues. Likewise, we include the branch (issue) name in every commit message.

$ git commit -m "Fixed the thingy jira/story/ourproject-4555"

Including the branch name / issue name like this means the commit will show up in JIRA linked from the issue itself! Pretty cool. I’ve worked with JIRA a lot over the years, but I’ve never been on a team that actually integrates it with git like this, and the organizational benefits are worth the extra effort.

Alas, it is all too easy to forget to add the branch name at the end of a commit, and I got tired of amending my commit messages (which is easy enough: git commit --amend will do it). I knew there had to be a way to automate this.

Adding a git hook

  1. Go into your project’s repo
  2. Open the .git/hooks directory (remember, .git is a hidden directory by default)
  3. Make a copy of .git/hooks/prepare-commit-msg.sample, paste it into the same folder, and remove the .sample extension. You should have a file named simply prepare-commit-msg with no extension.
  4. Paste this into the new file:
#!/bin/sh
# Automatically adds branch name to the end of every commit message.

NAME=$(git branch | grep '*' | sed 's/* //')
echo "$(cat "$1") $NAME" > "$1"

The NAME= line gets the current branch name from git branch (the current branch is marked with an asterisk, which sed strips off) and stores it in NAME.

The echo line reads your commit message from the file whose path git passes in as $1, appends NAME to it, and writes the combined message back to that file.

Just save the file and try it out in bash now! No need to reload bash or source anything.
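If you’d rather sanity-check the hook without touching a real repo, here’s a sketch that builds a throwaway repository and commits through it. It also swaps the grep/sed pipeline for git rev-parse --abbrev-ref HEAD, which is a more robust way to ask git for the current branch; the branch name and commit message below are just placeholders:

```shell
#!/bin/sh
# Demo: install a prepare-commit-msg hook in a scratch repo and commit through it.
set -e
DEMO=$(mktemp -d)
cd "$DEMO"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
git commit -q --allow-empty -m "initial commit"   # hook isn't installed yet
git checkout -q -b jira/story/ourproject-4555

# The hook itself; git passes the path of the commit-message file as $1.
cat > .git/hooks/prepare-commit-msg <<'HOOK'
#!/bin/sh
NAME=$(git rev-parse --abbrev-ref HEAD)
echo "$(cat "$1") $NAME" > "$1"
HOOK
chmod +x .git/hooks/prepare-commit-msg

echo "change" > file.txt
git add file.txt
git commit -q -m "Fixed the thingy"
git log -1 --pretty=%B   # prints: Fixed the thingy jira/story/ourproject-4555
```

Either way of reading the branch name works for ordinary branches; rev-parse just avoids parsing human-oriented output.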

Here’s the gist


More examples

There are many ways to approach this problem, and you can do a lot of sophisticated things with Git hooks. Here are some resources I found helpful in learning about git hooks and git in general:

Blogging for bucks: Year 1 report, mistakes made, lessons learned

Today I’m going to talk about my blogs and how they did this year. Since this is the first post in what I hope will become an annual (or even more regular) series, I’m going to share a long-winded history of my blogs, too, so that you can learn what I did and what my first year of blogging was like.

Just to set expectations, I didn’t make a mint in 2014. I made around $3500 this year off Amazon Affiliate and Google Adsense combined. Nearly half of that amount was made in the last three months of the year. To put the amount in perspective, we spent about that same amount on food in 2014 (for two adults). Or, it’s 10 payments on a car leased for $350 a month. Hey, I’ll take it! I know there are SEO wizards making enough to buy a new Lamborghini every day with their blogs, and that’s awesome, but that didn’t happen to me (yet :D).

However, everything I did is 100% doable by you if you’d like to try out blogging for a little side income!

Backstory

I’ve been putting content online since about 1997. My early efforts were what you’d expect from a 13 year old: a Tamagotchi fan site on Tripod, an AOL site dedicated to my dog, my Sailor Moon fan art. It was so rewarding to share stuff I cared about and find (small) audiences for it! I even made some really good friends through my sites and artwork.

Monetizing the content I made, however, never really occurred to me. I suppose I might have said, “Who would pay me for this? The Internet is full of free stuff!”

Sharing stuff online became nearly everyone’s hobby as the Internet’s popularity exploded and sharing stuff became easier and easier (and this is awesome – no matter how obscure a thing is, 99.9% of the time I can Google it and find someone talking about it).

Nonetheless, in 2013 I found inspiration in the works of bloggers like Young House Love (now retired?) and Smart Passive Income (written by a smart guy who makes a lot of money online by telling others how to make money online).

The formula couldn’t be simpler: build a site on a profitable topic, get traffic, earn money through affiliate sales. Sounds good to me!

First Blogs

April 2011: House Blog

A few years ago I bought a house that needed a lot of work done. I thought it would be fun to write about it and document the projects. (It was.) I bought a domain and put a WordPress blog on it.

House Blog was all over the place in terms of content, and it certainly wasn’t set up to make money. No ads or affiliate links. I was giving my posts “clever” titles, not SEO-friendly titles. And I certainly wasn’t writing with revenue in mind.

House Blog just sat there, getting a new post whenever I felt like it (maybe once a month) but it had a nice little trickle of 20-30 visitors a day.

The blog was 2 years old when I got interested in “passive income” from blogging. I discovered the Amazon Affiliate program through a couple other blogs I followed. I wanted to see if it would work for me, so I re-wrote a couple of my older House Blog articles (and wrote a few new ones) to include Amazon Affiliate links to appropriate products.

Within a week, I had my first sale! :O

I made about 30 cents off it (lol). But that was enough to convince me the formula worked and I could scale it from here.

I decided to start a new blog (and keep House Blog, of course). This one would be more focused. Rather than try to be an all-over-the-place home renovation blog (which was seriously hard to generate content for: how often do you replace a toilet?), I’d choose fewer topics and discuss them in excruciating detail (which I enjoy).

July 2013: Craft Blog

Actually, what happened was I started a whole bunch of blogs, all centered around different topics. (Sigh)

Craft Blog is the only one of that batch that has made any money to this day, so I’ll just talk about it. (Lesson: don’t run out and buy 5 domains the second you have a few ideas.)

Craft Blog’s domain was purchased in the first week of July 2013. I put some lorem ipsum content up while I enthusiastically spent an entire weekend customizing the theme (another mistake: it’s not necessary to spend hours – or days – customizing the theme until you have steady traffic). On the bright side, I learned a lot about WordPress and CSS during those early days of obsessing over the site’s design.

Setting up a new WordPress site, customizing it, and planning its articles gave me a much-needed creative outlet. I wasn’t challenged in my day job, so having this site to look forward to at the end of the day was very exciting and motivating.

A few days later, a visitor came to the site! Oh no! I had spent all my time customizing the site’s design. The only content on the site was placeholder junk!

I loved working on Craft Blog, so banging out a few pages of content that fit the site’s niche was easy and fun. For my “monetization” articles, I reviewed stuff I either owned or had used, and I made recommendations based on my own personal wishlist and research. (If there’s one thing I love more than buying new toys, it’s researching those new toys for weeks prior.) I added Amazon links to help visitors find the stuff I was talking about.

Of course, I had lots of ideas for articles that weren’t Amazon-oriented, and I wrote those, too. I love writing tutorials, so I also put some nice Photoshop and Etsy tutorials on the site. End result: the site looked better for the 1-2 visitors it was getting a day.

By August, I had 12 good articles. Some were long (1500 words), some shorter (500 words), but all were nice original content I wrote myself. There was very little traffic at this time, maybe 25 visitors the whole month.

I did a Pinterest blitz around this time, hoping to “go viral” there, but that never happened. I don’t enjoy Pinterest, so I was pretty quick to let that part of my blog marketing slide. In fact, I let most of my marketing efforts slide. I might be allergic to self-promotion. I found it too difficult to do a lot of the stuff the experts say you should do to market your blog, and I was content, at least for now, to just write useful content and post it on the site.

By September 2013, I was up to 22 articles on Craft Blog. I only had 147 visitors total that month (about 3 months into the site’s life), and no sales, but I was enjoying the project so much I just kept going through October when I decided to start a new site and let Craft Blog coast for a while.

October 2013: Disney Ride Blog

In September 2013 I went to Disneyland. It was great! I caught a bad cold when I returned home, though, and had nothing to do but lay in bed and think about how much fun I had at Disneyland. I decided to start a blog about my favorite Disney ride. It would just be a nice place to collect all the history, legends, and secrets of the ride in one place. I bought a domain and wrote a couple short articles about the ride and what I knew about it.

Within a few days, the site was getting traffic.

Whoa, what?!

In its first 30 days, Disney Ride Blog got more traffic than Craft Blog had had in its entire 4-month life. This inspired an obsession with the site, which I worked on regularly for the next two months. By January 2014, the site had its first 100 visitor day!

The site was a traffic monster, but it was making almost no money (literally pennies a day, if anything).

I think the site stood out because of a lack of competition. Lesson: if no one else is in a niche, maybe that’s because there’s no money in it? :D

Unfortunately, while Disney Ride Blog was my strongest traffic site almost until the end of 2014, it has barely made a dollar. I love the site and it’s easily one of my favorite hobbies, but it just eats up bandwidth and brings in nothing. I’ve experimented with ad placement, Affiliate links, etc, but it’s stubbornly unprofitable.

Fortunately, Craft Blog got a few clicks and sales before the end of 2013, which gave me some ideas as to what to do next for making a “money” blog.

2013 blog income: $17.16

Yup, $17.16 for three sites over the whole year. Rollin’ in it.

2014 in Review

January 2014: Gizmo Blog

Six months into this “blogging for bucks” endeavor I had been writing content for three blogs fairly regularly and seen traffic grow accordingly, but between Adsense and Amazon Affiliate I had made a grand total of about $17 for the entire year. I think a lot of people would have called it quits at this point, because that’s a pretty embarrassing return on investment.

Not me! Haha, I bought a new domain in January, this one for a particular category of “smart home” gadgets I was very interested in.

Gizmo Blog would be the culmination of all my learning thus far: niche topic that not many people were covering in great detail yet with a focus on customers on the verge of making a purchase (and Amazon Affiliate links to guide them to the product’s Amazon page).

With one comparison article (including a comparison chart and about 1500 words of original written content), the site had its first visitors from Google and its first Amazon link click within 7 days (!!!).

January 2014 blog income: $8.66

Hey, that’s nearly half of what I made in all of last year!

I added a few more articles to Gizmo Blog to cover the basics and then decided to let it sit. In the meantime, I did some footwork: I went to hardware stores and even an open house to see the products in person, since it would be impractical for me to install all of them in my own home. This helped me write smarter “hands on” reviews. It helps that I genuinely find the technology interesting – I can’t imagine writing a site like this without loving the thing you’re writing about.

February 2014: Double digit earnings!

February went better: as traffic grew (1660 visitors across all sites!), so did clicks to Amazon. I had my first double-digit earnings month. I didn’t do anything special or get any links, I just added a few new (long) articles to each site. From humble beginnings…

February 2014 blog income: $35.63

May 2014: First $100 month

Three important things happened between February and May 2014:

Thing 1: One of the manufacturers of a product I reviewed on Gizmo Blog tweeted a link to the review! This brought in a surge of traffic (58 in a day, woohoo!) and seemed to legitimize the site a bit in the eyes of Google because from this point on, traffic kept climbing – sometimes doubling with each passing month.

Thing 2: In May 2014, someone linked to one of my reviews on a forum, which is like a gift that just keeps giving because it not only sends regular traffic, it counts as a quality backlink in Google’s ranking algorithm.

Thing 3: I quit my day job and focused all my time on blogging and growing my web developer skills.

May 2014 blog income: $115.15

During the spring I also made a better effort at marketing the blogs. I created a Facebook, Twitter, Pinterest, and Google+ presence for all of them and re-tweeted / re-posted content to them regularly (1-2 times a week) for a while.

I don’t know how to make something “go viral” and I don’t enjoy spending a lot of time on social media doing what feels like an elaborate “look at me” routine, so again, I let marketing efforts fade out after a while.

Traffic grew steadily across all sites, which was great. Disney Ride Blog remained by far my strongest traffic-puller, but my weakest earner.

June 2014: My host complains

My long-time web host (Lunarpages, since 2005!) served me with a ticket and a complaint that my sites were consuming too many resources. This started my mad scramble to optimize my sites. Ultimately, I added W3 Total Cache, which took a load off the servers (for a while – see December 2014). Without caching, WordPress was rebuilding every page from scratch with PHP and MySQL every time users navigated around a site.

At the time, I was surprised that WordPress didn’t come with optimizations built-in. It’s up to the user to add things like caching, lazy loading of images, minification of CSS and JS, automatic backups, and security measures like limits to login attempts.

Lesson: optimization becomes very important once your site(s) are bringing in around 50+ visitors a day.

October 2014: Holiday season begins

The summer was very good by my beginner standards: about $140 a month on average for June, July, August, and September. At least, until October raised the bar.

I added zero content between July and September, thanks to attending the full time Code Fellows Dev Accelerator and having negative free time for blogging.

Yet traffic continued to grow! The best part was that, for the first time, my blogs were truly earning “passively”: October brought in nearly $600 even though I hadn’t touched the sites in months.

October 2014 blog income: $594.19!

This sudden uptick in earnings inspired me to put those new web dev skills to use and optimize all the sites for mobile, which may explain at least some of the traffic increase I saw in November and December. Disney Ride Blog and Gizmo Blog had been virtually unusable on mobile; now they work great on phones and tablets.

December 2014: Shopping season ends, host complains again

If October was great, then November was spectacular: $870 in earnings! I’ve read from other bloggers that the last three months of the year are the best for Amazon Affiliates and ecommerce sites, and I believe it. What a great way to end the year!


However, I expect the earnings to fall back to their summer levels (if not lower) as soon as Christmas comes and goes.

Alas, my host complained again in December. They don’t have a problem with my bandwidth usage (that’s “unlimited”!), but they do have a problem with CPU resources, which get consumed every time someone visits one of my sites. With average daily visitors now around 3,000 across my sites combined, I was hitting the server too hard for them.

Unfortunately, traffic peaked at the same time some bot network attacked many of my sites (or maybe the bots were always there, but weren’t causing any trouble until traffic reached a certain point). From the logs, it looks like at least one bot was trying to get in through wp-admin, and another seemed to be hitting the comments functionality of several of my sites.

I played whack-a-mole for 5 days trying to solve, or at least reduce, the effect of the various attacks across my many sites. I banned IPs, tweaked cache settings on my WordPress sites, added login-attempt-limiting plugins, turned off plugins, switched themes on and off… no one thing was causing the high CPU usage. Lunarpages doesn’t offer great tools for debugging, and the feedback loop is slow – as much as a day before I can see whether what I did had any effect.

It was time for something drastic.

I moved Gizmo Blog off Lunarpages and onto DigitalOcean, a scalable VPS that is probably more appropriate for a site that continues to grow in traffic every month.


I’ve been with DigitalOcean less than a week, and while moving the site over and locking it down security-wise took some time, the end result was an immediate reduction in traffic to my stressed-out primary host. The drop came on December 16th: that’s the day Gizmo Blog left Lunarpages for greener pastures.

It was probably inevitable, given how steadily Gizmo Blog’s traffic has been trending upward.

To Lunarpages’s credit, they did not kick me out or shut down my blogs in response to my unintentionally high CPU use. While researching this CPU problem over the past week, I’ve found many bloggers who did get shut down by their hosts for this sort of problem.

To address my WordPress CPU usage problems, I used a combination of:

  • W3 Total Cache and WP Super Cache (I think I like Super Cache better so far)
  • BJ Lazy Load, so image-heavy posts load images on an as-needed basis
  • WP Smush.it for negligible image optimization (I already use “Save for Web” on all images; your mileage may vary)
  • Simple Firewall for better login security and other firewall features
  • WP Clone for moving Gizmo Blog to a new host with minimal pain
  • WP-Optimize for clearing unused revisions out of the database
  • Disable Comments, an emergency measure I deployed to help reduce the load on the server
  • WP Maintenance Mode, another emergency measure I used to shut down Disney Ride Blog while I was experiencing CPU overages (unprofitable sites are the first to go in times of need :P)
  • GTmetrix and its WordPress plugin for monitoring site load time and performance
  • CloudFlare (free version) on my highest-traffic sites (how CloudFlare works)

It’s too soon to say if this is the end of the CPU overages saga, and I suspect it’s not. I’m currently using three hosts for my sites (Lunarpages, BlueHost, and DigitalOcean), all of which have been satisfactory in terms of what I expect from them (in other words, I don’t expect world class speed and performance out of a host charging me $5/month for shared hosting). Craft Blog will most likely be moving onto its own VPS in the near future and I may try someone other than DigitalOcean and see how they compare.

What’s next?

I don’t think I’ll start any new blogs this year. Maintaining individual blogs has become time consuming. Every update to WordPress has to be deployed individually to each site, and problems like CPU overages and hacker attacks often have to be debugged on all sites.

As for the ones I have, I plan to add content, continue to optimize the sites, move them to more suitable hosts, and just keep ’em growing. I’m eager to see where earnings fall to in the first months of the year, and whether traffic can continue to grow on its own during times where I don’t add content regularly.

If you (TILCode reader!) have enjoyed this break from coding talk, let me know in the comments and I’ll share blog updates more regularly. I’m not sure the web needs yet another site on how to blog for bucks, but I could be wrong. :) Hope you enjoyed this massive first installment!

Remove Unwanted Characters from WordPress Post Content after WordPress MySQL Migration

Today I learned… how to remove unwanted characters like â€ and Á from WordPress post content using a MySQL query.

Backstory: I recently migrated a blog from shared hosting to a VPS. This experience alone could fuel TILCode content into 2024, but this particular “TIL” article is about a curious character encoding artifact that occurred as I exported the WordPress MySQL data and imported it elsewhere: Á and â€ symbols everywhere!

Removing these unwanted symbols from my WordPress posts required running a few queries on my MySQL databases. Here are the steps I used.

(And if there’s a better/easier way to do this, please let me know in the comments. This was my first foray into MySQL queries, but it got the job done.)

Step 1: Log into phpMyAdmin

Access phpMyAdmin through your cPanel or at yourdomain.com/phpmyadmin – access varies by host and setup.

Step 2: Navigate to the SQL tab and change collation

Click on the phpMyAdmin logo so that you’re at the “root” level (and not inside a database – this might work from inside a database, but I did it from outside).

Click on the SQL tab.


Here, you’ll get a large window in which you can type queries – operations you perform on your data. 


The first query to run is one that will set the character encoding type. 

ALTER DATABASE your_db_name_here DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci;

I learned this from JesseWeb’s helpful tutorial on resolving WordPress character issues, but his code didn’t work for me as-is (I suppose that’s not surprising considering his guide is nearly 5 years old). I had to remove the quotes from the database name.

Be sure to replace your_db_name_here with your database’s actual name (look in the left column for your database names).

Click the “Go” button on the right to run the query.


Step 3: Remove unwanted characters from existing posts

For this query, you need to navigate to the actual database itself. Click on its name on the left.


Click on the SQL tab. Now the helper text says that queries are run on the database you selected.


Enter this query to replace every instance of â€ with an empty string:

UPDATE wp_posts SET post_content = REPLACE(post_content, 'â€', '');

Repeat the query for every unwanted character. Note also that you can stack ’em up in the query window like so and run ’em all at once:

UPDATE wp_posts SET post_content = REPLACE(post_content, 'â€', '');
UPDATE wp_posts SET post_content = REPLACE(post_content, 'Á', '');
UPDATE wp_posts SET post_content = REPLACE(post_content, 'foo', 'bar');

You may also need to clean up comments. You can do that with:

UPDATE wp_comments SET comment_content = REPLACE(comment_content, 'Á', '');

See shadez’s question on WordPress.org for more examples – his example was instrumental in helping me understand this issue.

And with that, all the unwanted characters are gone. Phew!

Two ways to inject a service into a Mocha Chai test



Today I learned… there are [at least] two ways to inject a service into Mocha unit tests using the Chai assertion library and Angular mocks. This is just a little thing, but I’ve seen this difference in a few unit testing tutorials and it confused me the first time I came across it.

In my project I have a service called mealsServer. No need to worry about what it does, for now we’re just testing that it gets injected successfully (in other words, exists).

Service Injection Technique #1:

Here I am declaring mealsServer as a variable and then injecting _mealsServer_ using beforeEach:

var mealsServer;

beforeEach(inject(function(_mealsServer_) {
    mealsServer = _mealsServer_;
}));

The underscores are a little syntax trick that makes it possible to use the same name for the injected service as for the local variable. In other words, if we didn’t wrap the injected _mealsServer_ in underscores, var mealsServer would need a different name. I’m all for keeping names consistent whenever possible, so I’m glad I learned about this.

Service Injection Technique #2:

And here’s an alternative: here I am injecting the mealsServer service as part of the it block:

it('should have a working meals-server service', inject(function(mealsServer) {
  expect(mealsServer).to.exist;
}));

I’m still learning the ropes of unit testing, so I’m sure there are advantages and disadvantages to each of these approaches. To get me started, I’m relying a lot on this tutorial: Testing AngularJS Apps Using Karma.

Personally, I like injecting the service in the same line of code that relies upon it being there. I think this is neater and will hold up better as this file becomes longer.

For reference’s sake, here’s the complete meals-test.js file below. It’s small right now, but just getting to the point of having (any!) tests run successfully was a several hour endeavor. In this version, I am just testing that my services exist and I’m using technique #2 from above.

I am using Mocha as my testing framework and Chai as my assertion library, and my project (and its tests) get Browserified, so the requires are there to ensure the modules can be found. I also use Karma to run the tests and PhantomJS as my headless browser.

'use strict';

require('../../../app/js/app.js');
require('angular-mocks');

describe('Testing services', function() {

  beforeEach(angular.mock.module('cbmApp'));

  it('should pass a simple test: true = true', function() {
    expect(true).to.equal(true);
  });

  it('should have a working meals-server service', inject(function(mealsServer) {
    expect(mealsServer).to.exist;
  }));

  it('should have a working user-factory service', inject(function(userFactory) {
    expect(userFactory).to.exist;
  }));

  it('should have a working file-reader service', inject(function(fileReader) {
    expect(fileReader).to.exist;
  }));

});

Whew! Now that that works, it’s onwards to writing more thorough unit tests!

Gulp – Build Automation for Javascript

Tired of restarting your server manually, over and over again, whenever you change something? Straining under the labor of recompiling your Less / Sass dozens of times an hour? No more. Let a robot do it for you. A robot named Gulp.

Why Gulp?

First off, I should mention that there are several build automation tools written specifically for Node.js apps. Grunt is probably the most popular, but there’s also Jake and a couple others. I started with Grunt, but I’m liking Gulp more and more as I use it. It’s faster, more flexible, and just “feels” more intuitive and straightforward.

Honestly, though, which one you use isn’t important. All that’s important is that you use SOME KIND of build automator / task runner. Over the long run, it will save you hours of repetitive, frustrating, mindless drudgery. Sound good? Read on.

Installing Gulp

To use Gulp in your app, you must install it globally onto your system, as well as locally in your project directory. Use the following commands in terminal, while inside your project dir:

npm install --global gulp
npm install --save-dev gulp
touch gulpfile.js

Your First Gulp Task

Open the gulpfile.js you just made in a text editor and put in the following:

var gulp = require('gulp');
gulp.task('default', function() {
  console.log('If you can read this, gulp is working!');
});

Go back into your terminal, type gulp, and press Enter. You should see the following output:

[20:33:59] Using gulpfile ~/YourProjectFolderHere/gulpfile.js
[20:33:59] Starting 'default'...
If you can read this, gulp is working!
[20:33:59] Finished 'default' after 74 μs

So what happened here? We created a task called default in the gulpfile, which calls a function when the task is run. That function then performs a console log. Using the gulp command with nothing after it will run any task named default. It’s a good idea to always have a default task that does the important work of building and running your project. That way, anybody else who has to work with your project can just type gulp in the terminal and see it run without having to paw through your code.

Your First Useful Gulp Task

So that was fun, but not particularly worthwhile. Let’s do something useful with Gulp!

Let’s say you have a directory called ‘js’ full of JavaScript files. All these files need to be copied over to a directory called ‘build’ before you can publish your app. No problem! Put this into your gulpfile:

var gulp = require('gulp');

gulp.task('copy-js', function () {
  gulp.src(['js/**/*.js'])
    .pipe(gulp.dest('build'));
});

gulp.task('default', ['copy-js']);

There’s a lot going on here, so I’ll explain bit-by-bit:

  • We created a new task called copy-js, which will do all our copying for us.
  • The first line inside that task, beginning with gulp.src, tells gulp where to look for the files we want to copy. That bunch of /s and *s we gave it is a pattern-matching string called a glob. Here’s how to interpret this glob:
    • The js/ part tells gulp to look inside the directory named ‘js’.
    • The **/ part tells gulp to look inside any subdirectories within the ‘js’ directory.
    • The *.js part tells gulp to find all files that end with the .js file extension.
  • On the next line, we chain a method onto the end of gulp.src… specifically, the .pipe() method. .pipe() takes the output of the previous method (i.e., the .js files we found) and lets us use it as input for another method, just like a unix pipe. This is extremely useful, as you’ll soon see.
  • .pipe() passes the files we found to gulp.dest('build'). gulp.dest() is used to save files to a particular location. Which location? Why, the one we told it: the ‘build’ directory.
  • Finally (and importantly!) we changed the default task. Instead of executing a function, default will now execute a list of sub-tasks. For now, we just want it to execute our copy-js task.

Now, if you type gulp into the terminal, any JavaScript files in the ‘js’ directory will be copied into the ‘build’ directory. Gulp will even create a ‘build’ directory for you if it can’t find one. How thoughtful!
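If you’re curious how a glob actually maps onto file paths, here’s a toy converter covering just the two wildcards explained above (gulp really delegates matching to the node-glob library; this sketch is only to build intuition):

```javascript
// Toy glob matcher handling only the '**' and '*' cases discussed above.
// Not gulp's real matcher -- gulp uses the node-glob library under the hood.
function globToRegExp(glob) {
  var parts = glob.split('/').map(function (part, i, arr) {
    if (part === '**') return '(?:[^/]+/)*';  // any depth of subdirectories
    var seg = part.replace(/\./g, '\\.')      // escape literal dots
                  .replace(/\*/g, '[^/]*');   // '*' = anything except a slash
    return i === arr.length - 1 ? seg : seg + '/';
  });
  return new RegExp('^' + parts.join('') + '$');
}

var jsFiles = globToRegExp('js/**/*.js');
console.log(jsFiles.test('js/app.js'));        // matches: file directly in js/
console.log(jsFiles.test('js/vendor/lib.js')); // matches: file in a subdirectory
console.log(jsFiles.test('js/readme.txt'));    // no match: wrong extension
```

The same pattern string works for watching, copying, or any other gulp.src call, which is why globs are worth getting comfortable with.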

Watch This

“This is all well and good,” you might be thinking, “but how does this actually save me time?” After all, you still have to keep typing gulp into the terminal every time you want this copying to happen, right?

The answer is no, you don’t. Gulp can run tasks for you, automatically. Enter gulp.watch():

var gulp = require('gulp');

var jsDir = 'js/**/*.js';

gulp.task('copy-js', function () {
  gulp.src([jsDir])
    .pipe(gulp.dest('build'));
});

gulp.task('watch-js', function () {
  gulp.watch(jsDir, ['copy-js'])
    .on('change', function (event) {
      console.log('File ' + event.path + ' was ' + event.type);
    });
});

gulp.task('default', ['watch-js']);

Ok, so what happened here?

  • We made a new task called watch-js. When this task is executed, gulp.watch() will keep a close eye on the directory we tell it, watching for files inside to change. When they do, the tasks in the array we provide will be executed… in this case, the copy-js task.
    • To put it simply, whenever we change a .js file, it’ll be copied over automatically. How cool is that?
  • We chained .on() to the end of gulp.watch(). This lets us execute code when certain conditions are met. In this case, when a file changes, we execute a function. This function uses the event parameter to let us console log which file changed, and how it was changed (added, changed, deleted, etc.)
  • Also, we put the JavaScript directory glob into a separate var called jsDir, which we use in both the copy-js and watch-js task. That way, we can make sure it stays consistent.
  • Finally, we changed the default task to execute watch-js when it’s called. By the way, you’ll notice this is an array; we can comma-separate multiple sub-task names to be called here, if we choose.

Sweet! What Else?

Gulp can help you automate all kinds of development-related tasks, including but not limited to:

  • Linting
  • Unit / Integration Testing
  • Bundling / Concatenation
  • Minifying / Compression
  • CSS pre-processor compilation (i.e. Sass / Less)
  • Image resizing / processing
  • Asset versioning
  • Running shell commands

To learn more, check out Gulp’s documentation and browse their extensive, searchable list of plugins. To use a plugin, npm install it, require it at the top of your gulpfile as a variable, and then use it based on the plugin’s documentation. Like the following example does with gulp-sass:

var gulp = require('gulp');
var sass = require('gulp-sass');

gulp.task('default', function() {
  gulp.src('sass/*.scss')
    .pipe(sass())
    .pipe(gulp.dest('css'));
});

That should be enough to get you started. Happy gulping!

AngularJS Infinite List – How to create a list that automatically adds a blank textarea as the user adds new data

This tutorial is about a neat trick you can use with ng-repeat and inputs in AngularJS. This is just one tiny part of a larger AngularJS project of mine you can explore here: Chicken Breast Meals on GitHub.

Let’s say you are building a user input form that lets the user input a series of items in a list, such as ingredients in a recipe. You could have the user click a link to add a new input field before typing in each ingredient, but that’s an extra (and annoying) step for users nowadays.

What you really want is a list of inputs that grows itself, offering a new blank input in response to each addition the user makes:

dynamic_list_animation
Infinitely-expanding list grows as the user adds to it

PLUNKER DEMO

The rest of this tutorial uses the Chicken Breast Meals project code to explain how this feature was made.

Part 1: In the view (.html file)

The html code (and Angular directives) that creates the ingredients list above is in app/views/admin/admin-edit-meal-view.html:

<h2>Ingredients</h2>
<ol class="ingredients-list">
  <!-- loop through and display existing ingredients -->
  <li data-ng-repeat="ingredient in formMeal.ingredients track by $index">
    <textarea name="ingredientLines"
              type="text"
              data-ng-model="formMeal.ingredients[$index].name"
              placeholder="Add ingredient"
              data-ng-change="changeIngredient($index)">
    </textarea>
    <!-- trash can button -->
    <a href="" data-ng-show="ingredient"
               data-ng-click="formMeal.ingredients.splice($index,1)">
      <img src="/assets/delete.png"/></a>
  </li>
</ol>

When the user selects a recipe to edit in the admin page, that selected recipe is represented by an object called formMeal. Inside formMeal are properties like:

  • name (which is saved as a String)
  • yield (saved as a Number)
  • cookTime (another Number)
  • ingredients (an Array of Objects)

On the <li>

The ng-repeat directive builds the list of ingredients by creating a <li> and a <textarea> for each ingredient already found in the saved recipe data.  Each ingredient has an index in the ingredients array, so we grab its name out of the array of ingredient objects like so:

formMeal.ingredients[$index].name

Immediately following the ng-repeat expression is track by $index. This bit of code is easy to overlook but it’s very important: it’s what keeps the user’s current textarea in focus while the user edits it. Without track by $index, the app kicks the user out of that text box after the first typed letter. (Ask me how much fun I had debugging this lose-focus problem…)
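To see why the tracking expression matters, here’s a toy diff (an illustration only, not Angular’s actual algorithm): ng-repeat reuses a DOM node only when an item’s tracking key survives the update, so the key function determines whether the edited textarea gets torn down.

```javascript
// Compare tracking keys before and after an edit. A node whose key survives
// is reused (focus kept); a node whose key vanished is recreated (focus lost).
// Illustration only -- not Angular's real implementation.
function diffNodes(oldItems, newItems, trackBy) {
  var oldKeys = oldItems.map(trackBy);
  return newItems.map(trackBy).map(function (key) {
    return oldKeys.indexOf(key) !== -1 ? 'reused' : 'recreated';
  });
}

var before = ['eg', ''];  // user is mid-word in the first textarea
var after  = ['egg', '']; // one more letter typed

// Tracking by the value itself: the edited item's key changes, so its
// node (and the user's focus) is torn down and rebuilt:
console.log(diffNodes(before, after, function (item) { return item; }));
// Tracking by index: the key is the position, which is stable:
console.log(diffNodes(before, after, function (item, i) { return i; }));
```

That stable-position key is exactly what track by $index gives ng-repeat.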

In the <textarea>

Each ingredient is represented by a <textarea>, and each one has its own ng-model directive pairing it with that particular index in the array.

data-ng-model="formMeal.ingredients[$index].name"

This lets us edit an existing ingredient anywhere in the list by that ingredient’s index. Since ingredients is an array, we need to pass it the index of the ingredient we’re editing via the <textarea>. (You can read more about ng-repeat and $index here in the Angular documentation.) This placeholder part is straightforward:

placeholder="Add ingredient"

This is what puts the default text into each <textarea> when the user hasn’t entered anything yet. It’s just a nice UX touch.

Finally, we have an ng-change directive. You can read more about ng-change here; basically, all it does is call the method (or do the thing) you tell it to any time there’s a change in the <textarea> it’s associated with.

data-ng-change="changeIngredient($index)"

A change to the <textarea> (ie: the user typing) causes the method changeIngredient() to run.

Wait, where’s changeIngredient()? It’s over in app/js/controllers/cbm-admin-controller.js, which we will look at next.

Part 2: In the controller (.js file)

Now we’re inside app/js/controllers/cbm-admin-controller.js looking at the changeIngredient() method.

We already saw that whenever the user updates text inside one of those <textarea> regions, this method gets called. (If you were to put a console log inside changeIngredient(), you would see it called every time you typed a letter into the textarea.)

$scope.changeIngredient = function(index) {
  if (index === $scope.formMeal.ingredients.length - 1) {
    $scope.formMeal.ingredients.push('');
  }
};

changeIngredient(index) checks the index that’s been passed in:

  • if that index is at the end of the array (ie: its index number is one less than the array’s length), then we are editing the last ingredient in the list and we need to push an empty ingredient ('') to the ingredients array to make the empty box appear at the end
  • if that index is not at the end of the array, we just update whatever’s at this index since it’s an ingredient that already exists. This is why you don’t see an empty box get added to the end of the list if you’re editing a field that’s not at the end.

It’s important to observe that this method works by checking whether the user is editing the last index (which always holds the empty <textarea>). This is why we don’t spawn new, empty textareas when the user edits earlier ingredients in the list.

What updates the ingredients list automatically? That’s Angular’s two-way data binding at work. Any time you update a model, the change happens in real time. If you’re new to Angular, here’s a Plunker demonstrating a very simple implementation of Angular’s two-way data binding.
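Stripped of Angular entirely, the grow-on-edit logic from Parts 1 and 2 fits in a few lines of plain JavaScript. This is a sketch with made-up names (makeIngredientList) that mirrors the controller code above:

```javascript
// Plain-JS sketch of the infinite list: editing the last (blank) entry
// appends a new blank entry; editing anything else leaves the length alone.
function makeIngredientList() {
  var ingredients = ['']; // start with one empty entry (see Part 3)
  return {
    ingredients: ingredients,
    // Stands in for $scope.changeIngredient, called on every edit
    changeIngredient: function (index) {
      if (index === ingredients.length - 1) {
        ingredients.push('');
      }
    }
  };
}

var list = makeIngredientList();
list.ingredients[0] = '2 chicken breasts'; // user types into the last box...
list.changeIngredient(0);                  // ...so a new blank box appears
console.log(list.ingredients.length);      // 2
```

In the real app, Angular’s data binding replaces the manual array assignment: typing in the textarea updates the model, and ng-change supplies the call to changeIngredient.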

Part 3: Offering an empty field by default

When you initialize your data or your app, you’ll need to include something like:

$scope.formMeal.ingredients = [''];

or

$scope.ingredients.push('');

so that the ingredients list has an empty one in it by default. Your implementation needs will vary, of course, but hopefully this little guide gave you enough of a start to build this “infinity list” into your own AngularJS form!

Don’t miss the Plunker demo of a simplified version of this feature that you can play with and adapt to your own project.

Deploying a MEAN stack app to Heroku



The time had come at last to deploy Chicken Breast Meals to an external  server so that it could be enjoyed by a larger audience. I chose Heroku because it’s a friendly “my-first-deployment” kind of technology that handles a lot of the nitty-gritty details that Amazon Web Services and others leave up to you. For a simple MEAN stack app deployment, Heroku has been sufficient for my needs so far.

heroku

However, despite Heroku’s relative straightforwardness, I still encountered a number of problems along the way. This post is about all the steps I took to get my MEAN app from GitHub to Heroku.

For clarity’s sake, these are the technologies I used in the project and in my development environment:

  • AngularJS
  • MongoDB
  • Express server
  • node.js (and a bunch of npm packages)
  • Heroku
  • MongoLab on Heroku
  • GitHub
  • Windows 7 with msysgit bash (my environment)

And unlike Heroku’s tutorial, this tutorial assumes you already have a git repo on your hard drive and it’s already full of your project files.

Step 1: Open a Heroku Account and Add a New App to your Dashboard

Hopefully, Heroku’s site can walk you through this sufficiently well.

Once you have an account, add a new app via the dashboard. On the current version of the Heroku dashboard, adding a new app is done with the + button.

app-plus
Heroku’s “add new app” button is easy to miss.

Step 2: Get the Heroku Toolbelt

Heroku’s own site will tell you to do this, too. Go to https://toolbelt.heroku.com/ and install the toolbelt appropriate to your environment. The toolbelt allows you to use the heroku command from your shell.

Step 3: Enter your credentials

Heroku’s toolbelt site walks you through these steps, too, but just in case you’re following along here:

$ heroku login
Enter your Heroku credentials.
Email: myaddress@gmail.com
Password (typing will be hidden)
Authentication successful.

You may get a response like this:

Your Heroku account does not have a public ssh key uploaded.
Could not find an existing public key at ~/.ssh/id_rsa.pub
Would you like to generate one? [Yn] Y
Generating new SSH public key.
Uploading SSH public key /home/jim/.ssh/id_rsa.pub... done

If this happens, choose Y and continue.

Since you already made a new Heroku app in step 1, you should skip the “heroku create” step.

Step 4: Add your Heroku app as a remote to your existing git clone’d repo

If you’re like me and you already have your git repo as a folder on your hard drive, you don’t need to make a new repo, you just need to add Heroku as a remote for it.

Navigate to your app’s root folder with cd and then use heroku git:remote -a yourappnamehere to add your remote.

If you follow these steps on Heroku’s own site, it will suggest using git init here (which you shouldn’t do since you already have a repo set up) and it will fill in your chosen app name where mine says chickenbreastmeals.

These are the steps I used to add my Heroku app as a remote to my existing GitHub repo:

$ cd /your/project/location
$ heroku git:remote -a chickenbreastmeals

Step 5: Attempt to push to Heroku – Permission Denied!

Pushing your repo to Heroku is done with just one line:

$ git push heroku master

…But if you’re like I was originally, you’ll get a permission denied (publickey) error.

(If you don’t get this error, hooray – you’re probably good to go. Or you’re stuck on a new problem that I didn’t encounter. Good luck.)

$ git push heroku master
Permission denied (publickey).
fatal: Could not read from remote repository.

Oh, snap. I Googled the “git push heroku master permission denied (publickey)” error and landed on this helpful Stack Overflow question. The first reply suggested a series of steps starting with heroku keys:add ~/.ssh/id_rsa.pub:

heroku keys:add ~/.ssh/id_rsa.pub   # or just heroku keys:add and it will prompt you to pick one of your keys

Alas, in my case, this didn’t work. Here’s what I got:

Uploading SSH public key c:/Users/Mandi/.ssh/id_rsa.pub... failed! Could not upload SSH public key: key file 'c:/Users/Mandi/.ssh/id_rsa.pub' does not exist

Well, that’s just super: I didn’t have an id_rsa.pub file yet. I needed to generate a new set of SSH keys, as detailed in my next step.

Step 6: Generate SSH keys

Fortunately, GitHub has an excellent guide on generating ssh keys, which will get you most of the way there. I encountered some problems along the way, which I’ve explained in this section.

The first step in GitHub’s instructions failed for me, of course, since I had no SSH keys.

All I got was:

ls -al ~/.ssh
total 7
drwxr-xr-x 1 Mandi Administ 0 Nov 10 16:04 .
drwxr-xr-x 48 Mandi Administ 12288 Nov 10 16:04 ..
-rw-r--r-- 1 Mandi Administ 405 Nov 10 16:04 known_hosts

If you also have no SSH keys (files with names like id_dsa.pub, id_ecdsa.pub, id_rsa.pub, etc) you’ll need to move right along to GitHub’s second step and generate a new SSH key:

ssh-keygen -t rsa -C "your_email@example.com"
# Creates a new ssh key, using the provided email as a label
# Generating public/private rsa key pair.
# Enter file in which to save the key (/c/Users/you/.ssh/id_rsa): [Press enter]

Just press enter when it prompts for a file location – you want the default. You’ll enter a passphrase twice (remember what you type here!):

Enter passphrase (empty for no passphrase): [Type a passphrase]
# Enter same passphrase again: [Type passphrase again]

And then you’ll get something like this, telling you where your identification and public key were saved as well as your key fingerprint and a random ascii art image for your viewing pleasure.

Your identification has been saved in /c/Users/you/.ssh/id_rsa.
# Your public key has been saved in /c/Users/you/.ssh/id_rsa.pub.
# The key fingerprint is:
# 01:0f:f4:3b:ca:85:d6:17:a1:7d:f0:68:9d:f0:a2:db your_email@example.com

Then you start the SSH agent:

$ ssh-agent -s
SSH_AUTH_SOCK=/tmp/ssh-ALBBCxgfEl11/agent.7104; export SSH_AUTH_SOCK;
SSH_AGENT_PID=6672; export SSH_AGENT_PID;
echo Agent pid 6672;

And then you add your key to the SSH agent… and, if you’re like me, get a new failure message to investigate:

$ ssh-add ~/.ssh/id_rsa
Could not open a connection to your authentication agent.

Looks like my authentication agent was never started. Huh? The previous step failed silently, apparently.

Fortunately, there is another great Stack Overflow question about this “could not open a connection to your authentication agent” issue. However, the first couple of answers didn’t actually work for me!

This is the one that did:

eval $(ssh-agent)

Caveat: I’m on Windows 7 64-bit using msysgit bash, so your experience may differ from mine. Responses to this answer suggest the problem is not unique to the Windows environment.

Anyway, now that the authentication agent is running I can properly complete the ssh-add step:

$ ssh-add ~/.ssh/id_rsa
Enter passphrase for /c/Users/Mandi/.ssh/id_rsa:
Identity added: /c/Users/Mandi/.ssh/id_rsa (/c/Users/Mandi/.ssh/id_rsa)

Phew! Onwards to the GitHub step.

Step 7: Add new key to GitHub account

Still following GitHub’s guide to generating SSH keys, the next step is to copy the contents of your id_rsa.pub file to your clipboard. This is easily done with clip, like so:

clip < ~/.ssh/id_rsa.pub
  1. Go to GitHub and click the “Settings” gear icon in the upper right.
  2. Click “Add SSH Key”
  3. Give your key a title (I named mine after my computer)
  4. Paste the contents of clipboard into the large field
  5. Click “Add Key” to save it

(GitHub has a visual guide to these steps here)

Step 8: SSH into GitHub

Okay, almost done. The next step (taken from GitHub’s own guide) is:

ssh -T git@github.com

You’ll get these warnings, but that’s okay:

The authenticity of host 'github.com (207.97.227.239)' can't be established.
# RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
# Are you sure you want to continue connecting (yes/no)?

Type “yes” and if everything goes okay, you’ll get:

Hi username! You've successfully authenticated, but GitHub does not provide shell access.

This step worked successfully for me, but if you get an error here GitHub has a guide for that too: Error: Permission Denied (publickey)

Step 9: Push to Heroku

Oh, yeah – I just remembered what I was trying to do before I went down the SSH error rabbithole: I was trying to push my GitHub repo to Heroku!

First, add that same public key to Heroku with heroku keys:add:

$ heroku keys:add ~/.ssh/id_rsa.pub
Uploading SSH public key c:/Users/Mandi/.ssh/id_rsa.pub... done

Phew, success! Now I was able to run git push heroku master.

$ git push heroku master
Warning: Permanently added the RSA host key for IP address '50.19.85.132' to the
 list of known hosts.
Initializing repository, done.
Counting objects: 801, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (705/705), done.
Writing objects: 100% (801/801), 994.30 KiB | 519.00 KiB/s, done.
Total 801 (delta 419), reused 0 (delta 0)

This message was followed by several screens depicting the installation of node and my project’s node packages. Heroku handles this setup automatically, and in my case, the installation processes went off without a hitch.

Step 10: Check out your app on Heroku – Application Error, hooray!

I’m glad I didn’t celebrate too early, because my Heroku app looks like this:

heroku-application-error

Application Error

An error occurred in the application and your page could not be served. Please try again in a few moments.

If you are the application owner, check your logs for details.

And no, it doesn’t go away in a few moments.

Step 11: Using MongoDB? Install MongoLab on your Heroku app

If you’ve ever tried to run your app locally while forgetting to fire up your MongoDB first, then you’ve probably seen your build process fail due to your database not being up and running.

There’s really no way to know that a not-running database is the cause of the application error screen, but I’ll spoil the surprise for you and tell you that in this case, that’s exactly what it was. If your Heroku-hosted MEAN app uses MongoDB, then you need to install an add-on called MongoLab.

Go to your app’s dashboard and click Get more addons

get-more-addons
If your Heroku-hosted MEAN stack app requires MongoDB, add MongoLab as a free add-on.

The addons page looks different every time I come in here, but the MongoLab icon hasn’t changed:

mongolab

Click the icon to learn more about MongoLab, including its pricing structure and features. You will have to enter a credit card number to enable MongoLab, but the sandbox tier (which is what you’re using here) will be free. (I think this is super annoying, BTW. If it’s free, it shouldn’t require a credit card to use. I’ve never actually been charged by Heroku or MongoLab.)

To install, head back over to your Command Line/Terminal window and enter:

$ heroku addons:add mongolab

You’ll get this sort of response:

Adding mongolab on chickenbreastmeals... done, v4 (free)
Welcome to MongoLab. Your new subscription is being created and will be available shortly. Please consult the MongoLab Add-on Admin UI to check on its progress.
Use `heroku addons:docs mongolab` to view documentation.

IMPORTANT SIDE NOTE: My server.js file is already configured to expect MONGOLAB_URI. I’ve provided my server.js code here in case you need to do the same to your server file:

'use strict';
 
var express = require('express');
var bodyparser = require('body-parser');
var mongoose = require('mongoose');
var http = require('http');
var app = express();
 
mongoose.connect(process.env.MONGOLAB_URI || 'mongodb://localhost/meals-development');
app.use(express.static(__dirname + '/build'));
 
app.use(bodyparser.json({limit:'50mb'}));
app.use(bodyparser.urlencoded({limit: '50mb', extended: true}));
 
require('./routes/admin-routes')(app);
 
var server = http.createServer(app);
 
var port = process.env.PORT || 3000;
server.listen(port, function() {
  console.log("Listening on " + port);
});

From here, I attempted to view my app again. This time I got:

cannot-get

Le sigh. But this is progress – I don’t get an Application Error anymore, so the database installation made a difference. Checking the Chrome console, my Heroku app is generating this error:

Failed to load resource: the server responded with a status of 404 (Not Found)

Step 12: Giving Heroku access to my Build folder

I scratched my head a bit over this “Cannot GET /” problem and Googled it, which led me to this Stack Overflow question, Heroku Cannot Get.

Just like the original asker, my .gitignore contained a line for my build folder, which meant Heroku had nothing to serve as it had no access to my “compiled” project.

I removed the build line from .gitignore, and pushed the updated .gitignore file and build/ folder to both GitHub and Heroku like so:

$ git push origin master
$ git push heroku master

Step 13: IT’S ALIVE!

At last, I see my app when I visit chickenbreastmeals.com. It’s lacking the database entries from my local development environment, so I’ll update this post once I get those in.

Hope this guide helped you deploy your MongoDB / AngularJS / Express / node.js app to Heroku! There are only about a thousand things that can go wrong between point A and point Z, so if something in this guide doesn’t work for you, it’s probably a difference in our environments or an error on my part – please leave a comment letting me know (and start Googling – good luck!).

Addendum

Did you use Gulp to build your app and automate some build processes? If so, your app probably doesn’t look so hot on Heroku right now. This is because Heroku doesn’t know which of your Gulp tasks needs to run after all your Node packages are installed. Let’s fix that!

Dev Dependencies

First off, it’s important to mention that if you installed any packages as a dev dependency (like you probably did with Gulp), Heroku will not include them in your build by default. This is because Heroku assumes you’re deploying a production build, and will run npm install --production, which ignores dev dependencies. There are two ways to fix this:

1. In your app’s package.json, move Gulp and all related packages from the “devDependencies” list into the “dependencies” list. This is a pain and I do not recommend it.

2. Run the following terminal command to tell Heroku that it should use the standard npm install command:

heroku config:set NPM_CONFIG_PRODUCTION=false

Postinstall Scripts

With that taken care of, we need to tell Heroku what commands we want to run after all of our packages are downloaded and installed. Luckily Heroku has made this easy! Just add a “scripts” block to your package.json file, like so:

"scripts": {
  "start": "node server.js",
  "postinstall": "bower install && gulp build-libs && gulp build"
}

The “start” script tells Heroku how to start my server: run node with the file server.js. The “postinstall” script is actually three commands separated by &&, run in sequence: bower install, gulp build-libs, and gulp build. In my gulpfile.js, the build-libs task concatenates and minifies several libraries like Angular and Bootstrap. This task relies on those libraries being in the bower_components folder, which is why I run bower install first.

Troubleshooting

If any of the steps in this article don’t work, there are a couple of things you can try. The most helpful thing to know is that you can run common Linux shell commands on your Heroku container with heroku run. Like this:

heroku run ls -la

This is just like running ls -la on your own system, and will list all of the files in your Heroku deployment’s main directory. This is how I figured out that I needed to run bower install: there was no bower_components folder in my deployment!

 

MongoDB cheat sheet

mongodb

This simple MongoDB tutorial is for you if:

  • you’re completely new to MongoDB and just want to do SOMETHING with it
  • you have MongoDB running but forgot the particulars of using the MongoDB shell
  • you want to look inside your db and confirm data’s actually getting written to it
  • you rebooted and lost your Mongo server and you can’t remember how you got it running in the first place
  • you don’t want to wade through the documentation again

Or, you’re just me in the future looking for where I wrote this down. You’re welcome, future self.

1. Install Mongo!

Install steps are better covered by Mongo itself: official MongoDB

2. Start Mongod

These steps differ by OS.

Mac / Linux

On my Mac machine, I can start mongod from anywhere because it’s in my $PATH (see this guide for steps on adding MongoDB to your $PATH).

Just use:

mongod

Successful connection looks something like:
Screen Shot 2015-04-26 at 11.59.03 AM

Windows

On my Windows machine, I have to navigate to Mongo’s installation folder to start mongod. Open Command Prompt and navigate all the way into the bin folder. My mongo folder is here:

J:\mongo\mongodb\bin

Now use:

mongod

On Windows, I see connection spam scroll by. Leave this window open and go to the next step.

mongod_started

Problems starting Mongodb?

If you get the “Unable to lock file: data/db/mongod.lock. Is a mongod instance already running?” problem, you probably have multiple instances of mongodb already running. This can happen as you switch projects, switch between user accounts on the same machine, etc.

To fix it, do this to list your computer’s processes and filter them to just mongo (this example is from when I had the problem on my Mac):

ps aux | grep mongo

On my machine, running that command revealed a couple instances of mongo already running (these were started by Jim using a separate account on the same computer). The third process in the list (the one owned by mjgrant) is the grep itself.

Screen Shot 2015-04-26 at 11.24.10 AM

Because my mongo instance was started by “root” (another Mac account, really), I had to be all dirty and use sudo to kill it by its process number (second column from the left).

sudo kill 61180

If you run the ps aux command again, you should see that there are now no instances of mongo running. If there are, just kill them using the same steps.

But what’s this? Trying to start mongo gives me this error now:

2015-04-26T11:30:11.114-0700 [initandlisten] couldn't open /data/db/memry_database.ns errno:13 Permission denied
2015-04-26T11:30:11.114-0700 [initandlisten] error couldn't open file /data/db/memry_database.ns terminating
2015-04-26T11:30:11.114-0700 [initandlisten] dbexit:

Rather annoyingly in our shared-computer situation, mongo’s knowledge of databases transcends user accounts. Navigating up to /data/db I can see all the databases on this computer. cbm_database is the one I’m trying to use, but mongo is choking on trying to access Jim’s memry_database.

Screen Shot 2015-04-26 at 11.35.10 AM

I check their permissions…

ls -la

Screen Shot 2015-04-26 at 11.37.15 AM

When asked why his databases belong to “root”, Jim says, “I probably did it wrong” :D Alas, we don’t know how we ended up with databases belonging to “root”, but Jim must have been using mongo as a root user, hence why he didn’t run into problems accessing databases owned by mjgrant.

Anyway… I used chown to assign ownership of these rogue root databases to my own account to unblock my work. (Standard disclaimer applies: use sudo with caution.)

sudo chown mjgrant memry_database.ns
sudo chown mjgrant memry_database.0

I run ls -la again and confirm that now I own all of the databases.
sudochown_sailormoon

Now you should be able to start MongoDB with…

mongod

And now you should see the connection data:

Screen Shot 2015-04-26 at 11.59.03 AM

3. Start the Mongo Shell

Open a new window (and navigate again to the bin folder if you’re on Windows).

mongo

This line starts up the Mongo shell.

(So to recap, mongod has to happen before mongo.)

On Mac:

Screen Shot 2015-04-26 at 12.09.22 PM

On Windows: 

mongo_shell

MongoDB shell version: x.x.x
connecting to: test

You can now start your localhost server.  (If you were blocked by Error: failed to connect to [localhost:27017] that should now be resolved.)

From here on out, commands you type into the command line will be mongo-specific.

4. Viewing your MongoDBs

Let’s say you want to see your databases:

show dbs

show dbs delivers a list of your databases. You should see something like this in your terminal window after you type it:

mongodb_see_dbs

On mine, the result is:

> show dbs
admin <empty>
local 0.078GB
meals-development 0.078GB

5. Using your Mongo DBs

These are your database names. Go inside them with “use”:

use meals-development

Once you’re “using” a database, though, the terminal doesn’t give much clue as to what to do next.

use_db

6. Viewing Collections

A collection is a group of MongoDB documents. Generally, they’re similar in purpose, but they don’t have to conform to one shared schema. You can see collections inside a db by typing:

show collections

As an example, inside my meals-development database I have:

show_collections_mongo_db

meals
system.indexes

Ah hah, finally. Now I know the name of the collection that contains my recipe (meal) data.

7. Look inside a collection

We’re almost to the good part. To see inside the meals collection, type:

db.meals.find()

You should get a number of objects with ids, names, etc. Each object will start with something like: { "_id" : ObjectId("544dabfba054…

That’s it!

This was just a short guide to my most commonly used MongoDB shell commands. When I’m setting up a new db, I use these steps to look inside my db and see if data is being saved the way I expect it to.

Helpful Links