
Using git offline – git disconnected

Git is a distributed version control system

Imagine two colleagues: one of them has lost access to his git account (or worse, doesn’t have any Internet access at all). Let’s see how a distributed version control system can help us …

Constraints

  • Alice and Bob work at the same “Acme” company on the project “awesome-project”, they use github (it could be any other solution that supports git remote repos)
  • Alice has a working workstation (access to Internet, she can push/pull to github with her account)
  • Bob:
    • doesn’t have any Internet access (he lost it)
    • his account to push/pull to github has been disabled for some reason
  • They have a usb drive they can share

Goal

In the next sections, we’ll see how, with the help of Alice, Bob will manage to:

  • Retrieve the latest changes from github
  • Continue to work locally
  • Push his changes to github

Init

Create a bare repository

Alice will insert the usb drive into her computer. She will create a “bare” repo that will act as a remote repository (you will be able to push to and pull from it).

Alice:~ $ cd /Volumes/USB_DRIVE
Alice:USB_DRIVE $ mkdir awesome-project.git && cd $_
Alice:awesome-project.git $ git --bare init
Initialized empty Git repository in /Volumes/USB_DRIVE/awesome-project.git/

Point a remote from your working repo to the bare repo

Alice will go to the working directory of “awesome-project” and point a git remote to the bare repo she just created on the usb drive.

Alice:~ $ cd awesome-project
Alice:awesome-project $ git remote add usbdrive /Volumes/USB_DRIVE/awesome-project.git/

When listing remotes, she will see a new remote like this:

Alice:awesome-project $ git remote -vv
origin https://github.com/acme/awesome-project.git (fetch)
origin https://github.com/acme/awesome-project.git (push)
usbdrive /Volumes/USB_DRIVE/awesome-project.git/ (fetch)
usbdrive /Volumes/USB_DRIVE/awesome-project.git/ (push)

Retrieving latest changes

We’ll now see how Bob will retrieve the latest changes without:

  • direct access to the remote repo via Internet (or any Internet access at all)
  • an active git account

Push local repo to usb drive remote repo

Say Bob has not been able to synchronize for a few hours (or a few days). Alice, who has access to the remote repo, will now:

  • retrieve the latest changes of the repo from the remote origin
  • push those changes to the bare repo, on the remote usbdrive
  • then, in the next step, Bob will retrieve those changes
Alice:awesome-project $ git pull origin
Already up to date.
Alice:awesome-project $ git push usbdrive --all
Enumerating objects: 6, done.
Counting objects: 100% (6/6), done.
Delta compression using up to 8 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (6/6), 492 bytes | 492.00 KiB/s, done.
Total 6 (delta 0), reused 0 (delta 0)
To /Volumes/USB_DRIVE/awesome-project.git/
* [new branch] develop -> develop
* [new branch] master -> master

Note: Instead of git push usbdrive --all, you can specify a specific branch to avoid pushing all your branches.
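For example, pushing only the master branch would look like this:

Alice:awesome-project $ git push usbdrive master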

Pull from usb drive remote repo to local repo

Now, Alice can give the usb drive to Bob, who will be able to pull the new changes from the remote repo on the usb drive, without needing any Internet access or git account:

  • Bob will plug the usb drive
  • he will point his working directory to the remote repo on the usb drive he just plugged in, just like above
  • then pull the changes on a specific branch
Bob:~ $ cd awesome-project
Bob:awesome-project $ git remote add usbdrive /Volumes/USB_DRIVE/awesome-project.git/
Bob:awesome-project $ git pull usbdrive master
remote: Enumerating objects: 5, done.
remote: Counting objects: 100% (5/5), done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
From /Volumes/USB_DRIVE/awesome-project
* branch master -> FETCH_HEAD
* [new branch] master -> usbdrive/master
Updating 3aff4f8..741ac86
Fast-forward
README.md | 1 +
1 file changed, 1 insertion(+)

Note: You need to specify the branch you want to pull so that git will track it (see the section about tracking branches at the end).

Pushing latest changes

We’ll now see how Bob will push the changes he just made in his local working directory to the remote repo on the Internet, via Alice, without:

  • direct access to the remote repo via Internet (or any Internet access at all)
  • an active git account

Bob worked on his local working directory and made a few commits on a local branch called feature/from-bob.

In order to make his work available to everyone, Bob needs to:

  • push his local branch feature/from-bob to the usb drive remote repo
  • give the usb drive to Alice
  • Alice will pull the feature/from-bob branch from the usb drive remote repo to her local repo
  • She will then push the feature/from-bob branch from her local working directory to the remote repo
  • Then everybody will have access to the changes Bob made

Pushing from local working directory to usb drive remote repo

Bob pushes his local branch feature/from-bob to the usb drive remote repo.

Bob:awesome-project $ git push -u usbdrive feature/from-bob
Enumerating objects: 4, done.
Counting objects: 100% (4/4), done.
Delta compression using up to 8 threads
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 264 bytes | 264.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To /Volumes/USB_DRIVE/awesome-project.git/
* [new branch] feature/from-bob -> feature/from-bob
Branch 'feature/from-bob' set up to track remote branch 'feature/from-bob' from 'usbdrive'.

Retrieving Bob’s changes from the remote repo on the usb drive

Bob gives the usb drive to Alice; she plugs it into her computer in order to pull the branch feature/from-bob from the usbdrive remote repo (on the usb drive, same as in the first step when she pushed – but this time, she is pulling from it to retrieve Bob’s work).

Alice:awesome-project $ git fetch usbdrive
remote: Enumerating objects: 4, done.
remote: Counting objects: 100% (4/4), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
From /Volumes/USB_DRIVE/awesome-project
* [new branch] feature/from-bob -> usbdrive/feature/from-bob

Pushing Bob’s work to github via Alice’s workstation

Alice has retrieved Bob’s branch feature/from-bob in her local working directory; she will now push it to github (the remote origin), making it available to everyone.

Alice:awesome-project $ git checkout feature/from-bob
Branch 'feature/from-bob' set up to track remote branch 'feature/from-bob' from 'usbdrive'.
Switched to a new branch 'feature/from-bob'
Alice:awesome-project $ git push -u origin feature/from-bob
Enumerating objects: 8, done.
Counting objects: 100% (8/8), done.
Delta compression using up to 8 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (6/6), 487 bytes | 487.00 KiB/s, done.
Total 6 (delta 0), reused 0 (delta 0)
remote:
remote: Create a pull request for 'feature/from-bob' on GitHub by visiting:
remote: https://github.com/acme/awesome-project/pull/new/feature/from-bob
remote:
To https://github.com/acme/awesome-project.git
* [new branch] feature/from-bob -> feature/from-bob
Branch 'feature/from-bob' set up to track remote branch 'feature/from-bob' from 'origin'.

Conclusion

This might appear to be a twisted use case; the goal was to help you better understand distributed version control and remote/tracking branches.

  • you create a tracking branch when you use -u which is short for --set-upstream
  • tracking branches are local branches that have a direct relationship to a remote branch
  • If you’re on a tracking branch and type git pull, Git automatically knows which server to fetch from and which branch to merge into (see the sketch below)
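A quick way to visualize those relationships is git branch -vv, which lists each local branch with the remote branch it tracks. Here is a sketch of what Bob might see (hashes and commit messages are illustrative):

Bob:awesome-project $ git branch -vv
* feature/from-bob 8f3c2a1 [usbdrive/feature/from-bob] My latest work
  master           741ac86 [usbdrive/master] Update README.md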

You may be used to having only one remote (usually origin); in the example, we used two. You can have multiple remotes, so you can push to multiple remote servers.

A case you will often come across is when you fork a repo; you will have:

  • origin as your main remote pointing to your fork
  • upstream (it could be named differently) pointing to the original repo, so that you can sync up, as sketched below
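Setting this up only takes adding a second remote and fetching from it (the upstream URL here is hypothetical):

$ git remote add upstream https://github.com/original-owner/awesome-project.git
$ git fetch upstream
$ git merge upstream/master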

CircleCI – How to use the latest version of docker-compose / docker-engine?

On my latest project topheman/docker-experiments, I’ve been using CircleCI to build docker images of my app on the CI and run unit tests through them, using the same docker-compose file configuration as on my local machine. However, at my first build, I encountered the following error:

ERROR: Version in “./docker-compose.yml” is unsupported. You might be seeing this error because you’re using the wrong Compose file version. Either specify a supported version (“2.0”, “2.1”, “3.0”, “3.1”, “3.2”) and place your service definitions under the `services` key, or omit the `version` key and place your service definitions at the root of the file to use version 1.

For more on the Compose file format versions, see https://docs.docker.com/compose/compose-file/

This means that the docker-compose file format I’m using (v3.4) isn’t supported by the docker-engine version used in the default setup of CircleCI – see the compose and docker-engine compatibility matrix.

Fortunately, on CircleCI, in machine executor mode, you can change/customize the image your VM will be running (by default: circleci/classic:latest) – see the list of available images. All I had to do was switch the image name, and I was able to use another version of docker-compose / docker-engine on the CI:

version: 2
jobs:
  build:
-    machine: true
+    machine:
+      image: circleci/classic:201808-01

You can see the modifications in that commit.
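For context, here is a minimal sketch of what such a machine-executor config could look like (the steps and the web service name are assumptions, not the exact ones from the project):

version: 2
jobs:
  build:
    machine:
      image: circleci/classic:201808-01
    steps:
      - checkout
      # build and test through docker-compose, like on a local machine
      - run: docker-compose build
      - run: docker-compose run --rm web npm test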

More posts to come about docker; meanwhile, you can check out topheman/docker-experiments.


Continuous deployment with Travis CI

This post is inspired by the notes I published on my project topheman/npm-registry-browser where you can find an application of the continuous deployment I will be writing about.

Before starting

  • I’m assuming that you have a build step (like npm run build) that creates a build folder containing anything that should be deployed
  • I will assume that you already have a test suite running on Travis CI
  • I will use github pages as hosting for my production server
  • I will use surge.sh as hosting for my staging server (BONUS)
  • The API server is not part of the flow I’m describing (hosted on another server)

I’m using the github flow, which is well suited for continuous deployment (this is not a requirement). Reminder:

  • Anything on master is deployable
  • Working on a feature / a fix? Create a branch
  • Commit to that branch locally and regularly push to the same named branch on the server
  • Open a pull-request when you need feedback or are ready to merge
  • Once reviewed and approved, merge to master
  • Once merged to master, it should be deployable

Goal

Here is the Continuous Deployment workflow we will set up (this is a little opinionated – you might have slightly different needs; the use cases are wide enough that you can choose to either follow it blindly or adapt it):

  • Only tagged commits pushed on master will deploy to production. That way, we can choose when we want to ship to production, all we’ll have to do is tag a commit and push it.
  • Each time we ship to production, we’ll also upload the generated build folder to github to make it available in the releases section
    • teams won’t need the whole toolchain to retrieve a specific build (a simple download will do the job)
    • if a weird bug is filed for a specific version, we track the exact files that were deployed on the server
  • Each push on master will deploy to staging. That way, the QA team will have access to the latest features and will be able to report bugs that we’ll fix before finally shipping to production.

Of course, these deployments will only happen if all the tests pass first. We won’t deploy any failing build.

Setup

Create a github token

First, you will need to create a github token to:

  • deploy on github pages
  • upload your generated build files to the releases section

1) Go to https://github.com/settings/tokens and generate a token that has at least the scope public_repo.
2) Go to https://travis-ci.org/<owner>/<project>/settings (the dashboard of your project on Travis CI) and add the token as the env variable GITHUB_TOKEN.

Deploy production to github pages

In your .travis.yml file, you will add the following:

before_deploy:
  - npm run build
deploy:
  - provider: pages
    skip_cleanup: true # Don't clean up the working directory (keeps the build folder)
    github_token: $GITHUB_TOKEN
    keep_history: true
    local_dir: build
    on:
      tags: true
      branch: master

You can do some deployment tests on a feature branch – just temporarily update the on section.

To check if it works on the master branch, you just have to push a tag. Example:

git tag tag-test-release-production
git push --tags origin master

Once it’s done, you can delete the tag locally (and remotely) like that (if you wish):

git tag --delete tag-test-release-production
git push --delete origin tag-test-release-production

Upload artefacts to github releases

Since we can only upload one file at a time, we will start by compressing the build folder into an archive. Update the before_deploy section of your .travis.yml file:

before_deploy:
  - npm run build
  - tar czvf build.tar.gz -C build .

Then, add a new provider to the deploy section:

deploy:
  - provider: releases
    api_key: $GITHUB_TOKEN
    file: "./build.tar.gz"
    skip_cleanup: true
    on:
      tags: true
      branch: master

Deploy to staging using surge

As you have already seen, you can use multiple providers, but you can also trigger them on different occasions. Say you would like to deploy to a staging server each time someone pushes to master? Here is an example of how to do it with surge.sh (a static site hosting solution – you may use a similar one):

Add the 2 following env vars to the Travis settings of your repo:

  • SURGE_LOGIN: Set it to the email address you use with Surge
  • SURGE_TOKEN: Set it to your login token (get it by running surge token)

Add the following to the deploy section of your .travis.yml (specify your domain name):

deploy:
  - provider: surge
    project: ./build/
    domain: example.surge.sh
    skip_cleanup: true

Notes

A lot of providers are supported by Travis CI; the ones exposed in this article are just examples (as is the workflow). If you are deploying to an unsupported hosting service (or you have some custom workflow), you can make your own deploy.sh script.
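Travis CI exposes a script provider for exactly that use case; here is a minimal sketch (the deploy.sh file name is just an example):

deploy:
  - provider: script
    script: bash ./deploy.sh
    skip_cleanup: true
    on:
      branch: master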

Deploy hooks aren’t executed on PRs. You don’t have to worry about malicious deployments or leaking tokens to untrusted forks: both encrypted Travis variables and environment values set via repo settings are not provided to untrusted builds triggered by pull requests from another repository (source).

You could use the after_deploy hook to make a curl request to verify the deployment went through (if you have some metadatas such as the git hash or the date of the generated build, you can be 100% sure it has been deployed) – see how to inject metadatas into your generated build.
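A minimal sketch of such a check, assuming a hypothetical staging domain and a build banner that contains the short git hash:

after_deploy:
  - curl -s https://example.surge.sh/index.html | grep "$(git rev-parse --short HEAD)"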

I’m using everything I described above in my project topheman/npm-registry-browser.

Add metadatas to your build files

Have you ever wondered if the file you’ve been served is up to date? Why would you?…

Not so long ago, you might have been programming in PHP, and when something went wrong, the first thing you would say was “Try to clear your browser’s cache”.

Now that our asset filenames are hashed by default by bundlers like webpack, you could think that we shouldn’t run into such problems anymore: since the filename of a js/html file (or any other file) changes based on its content, the browser is forced to retrieve the freshest version.

But between you and the server, there could be proxies / caching systems (like varnish / memcached). And more recently, on the client-side, we have ServiceWorkers that let us enable caching strategies.

So, to be 100% sure of which version of the site I’m running, I’ve been using for a few years now a little routine that adds metadatas into the main generated files, based on:

  • infos inside the package.json (name, description, version, author, license)
  • git hash
  • date of the generation of the build

Example on an index.html:

<html>...</html><!--!
 * 
 * my-react-app-starter
 * 
 * create-react-app based project with a few pre-configured / installed features such as eslint or prettier
 * 
 * @version v1.0.0 - 2018-06-14T17:28:30+02:00
 * @revision #640ba4d - https://github.com/topheman/my-react-app-starter/tree/640ba4df20e3452acd2dd10b0f746a8300af0749
 * @author Christophe Rosset <tophe@topheman.com> (http://labs.topheman.com/)
 * @copyright 2018(c) Christophe Rosset <tophe@topheman.com> (http://labs.topheman.com/)
 * @license MIT
 * 
-->

That way, with a simple curl, you can be sure of the date / version (and other infos) of the generated file you’ve been served.
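For instance, dumping the end of the page is enough to read the banner (the URL is hypothetical):

$ curl -s https://example.com/index.html | tail -n 12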

Implementation

Whether you use create-react-app or have direct access to the webpack.config, first, copy/paste this common.js file to the root of your project, then:

npm install --save-dev moment git-rev-sync

Using create-react-app

I’ll explain below the implementation that you can see in this diff.

1) Copy/paste the following in a bin/expand-metadatas.js file in your project:

const { getBanner, getInfos } = require("../common");

process.env.REACT_APP_METADATAS_BANNER_HTML = getBanner("formatted");

process.env.REACT_APP_METADATAS_VERSION = getInfos().pkg.version;

2) Add --require ./bin/expand-metadatas.js to your package.json; that way, react-scripts will require bin/expand-metadatas.js before running and inject REACT_APP_METADATAS_BANNER_HTML and REACT_APP_METADATAS_VERSION as env vars:

"scripts": {
  "start": "react-scripts --require ./bin/expand-metadatas.js start",
  "build": "react-scripts --require ./bin/expand-metadatas.js build",
  "test": "react-scripts --require ./bin/expand-metadatas.js test --env=jsdom",
  "eject": "react-scripts eject"
}

3) At the end of public/index.html, append:

<!--!%REACT_APP_METADATAS_BANNER_HTML%
-->

You can also access process.env.REACT_APP_METADATAS_VERSION in the code of your app (to show the version number for example).
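For example, a hypothetical footer component could display it like this:

// src/Footer.js - a minimal sketch (the component itself is hypothetical)
// REACT_APP_METADATAS_VERSION is inlined at build time by react-scripts
import React from "react";

const Footer = () => (
  <footer>v{process.env.REACT_APP_METADATAS_VERSION}</footer>
);

export default Footer;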

Using the webpack.config

Like in topheman/webpack-babel-starter, you can require common.js from your webpack.config and:

  • access getBanner / getBannerHtml / getInfos
  • use the return values with the HtmlWebpackPlugin – see the sketch below
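A minimal sketch of what that could look like (the bannerHtml option name is an assumption – check the actual webpack.config of the project):

// webpack.config.js
const HtmlWebpackPlugin = require("html-webpack-plugin");
const { getBannerHtml } = require("./common");

module.exports = {
  // ... entry / output / loaders ...
  plugins: [
    new HtmlWebpackPlugin({
      template: "src/index.html",
      // custom options are available in the template as htmlWebpackPlugin.options.bannerHtml
      bannerHtml: getBannerHtml()
    })
  ]
};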

Conclusion

The next time your build is deployed on a server and you’re being told that your release doesn’t work, a simple curl will settle it… The front is not the problem! 😉


Package a module for npm in CommonJS/ES2015/UMD with babel and rollup

About a year ago, I started the rxjs-experiments project. Aside from rxjs, it is all vanilla JS. I needed a simple frontend router with at least a deferred mounting feature (only mount a route when a promise is resolved). After some research on npm and github, I chose to write it myself.

The purpose of this article is not the router itself, but the whole workflow around it to get it to a published package that will be:

  • maintainable (you should have a linter, unit tests and a CI like in any other of your projects)
  • format unopinionated – as in whatever way you choose to consume the package – be it using:
    • webpack/browserify/rollup (or any other module bundler) in CommonJS or ES2015 module mode
    • directly in the browser (via a umd build)
  • providing some documentation and example

You can apply some of the following concepts to any regular project by the way …

Source code available at topheman/lite-router.

Getting started

git init
npm init -y

I’ll be using yarn in the examples. You can use npm as well of course.

.gitignore

Since you will be publishing your package on the npm registry and its example on github pages, your workspace will contain artefacts (files that aren’t part of your source code but were generated by a build task) that shouldn’t be versioned in git (to avoid noise and problems with merge conflicts).

Your .gitignore file should look something like this:

.DS_Store
*.log
node_modules
.idea
dist
lib
es
coverage
build

That way, any generated file won’t be versioned in git – though, they will be part of your published package, as we’ll see below.

.editorconfig

A .editorconfig file is just good practice (especially when you work with a team), to enforce things such as:

  • Indent style: tabs or space
  • Encoding
  • EOL

Example of .editorconfig:

# http://editorconfig.org
root = true

[*]
# change these settings to your own preference
indent_style = space
indent_size = 2

# it's recommended to keep these unchanged
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true

[*.md]
trim_trailing_whitespace = false

[{package,bower}.json]
indent_style = space
indent_size = 2

Source code

Put your source code in the src folder.

Setup Babel

Run

yarn add babel-cli babel-core babel-preset-env cross-env --dev

The .babelrc file lets you describe how you want babel to behave:

  • The env option will let you use a specific config according to BABEL_ENV (Jest will override it using NODE_ENV)
  • The babel-preset-env is a preset that will determine which babel plugins you need, based on the options you pass
{
  "env": {
    // jest doesn't take BABEL_ENV into account, you need to set NODE_ENV - https://facebook.github.io/jest/docs/getting-started.html#using-babel
    "commonjs": {
      "presets": [
        ["env", {
          "useBuiltIns": false
        }]
      ]
    },
    "es": {
      "presets": [
        ["env", {
          "useBuiltIns": false,
          "modules": false
        }]
      ]
    }
  }
}
  • useBuiltIns: false means that we don’t want to ship any useless polyfills (we leave that choice to the final user).
  • "modules": false means that you don’t want the modules to be transformed to CommonJS (this will be used when building ES2015 modules)

Setup jest, eslint and pre-commit hook

You can totally skip this section if you don’t care about unit tests, code quality …

jest

Run

yarn add babel-jest jest --dev

Add the following to your package.json:

"scripts": {
  "jest": "cross-env NODE_ENV=commonjs ./node_modules/.bin/jest",
  "jest:watch": "npm run jest -- --watch"
},
"jest": {
  "testRegex": "(/tests/.*\\.spec.js)$"
}

jest.testRegex describes the filename pattern of your unit tests. So create a unit test file like tests/index.spec.js:

describe('foo', () => {
  it('bar', () => {
    expect(true).toBe(true)
  })
})

Now, you can run:

  • npm run jest: one shot unit-test
  • npm run jest:watch: runs unit-tests in watch mode

How jest manages BABEL_ENV:

Jest sets NODE_ENV to “test” if it isn’t provided and otherwise lets you use a custom override. It doesn’t use BABEL_ENV at all.

eslint

Run

yarn add eslint eslint-plugin-import eslint-config-airbnb-base babel-eslint --dev

Create a .eslintrc file:

{
  "parser": "babel-eslint",
  "rules": {
    "max-len": 0,
    "comma-dangle": 0,
    "brace-style": [2, "stroustrup"],
    "no-console": 0,
    "padded-blocks": 0,
    "indent": [2, 2, {"SwitchCase": 1}],
    "spaced-comment": 1,
    "quotes": ["error", "single", { "allowTemplateLiterals": true }],
    "import/prefer-default-export": "off",
    "arrow-parens": 0,
    "consistent-return": 0,
    "no-useless-escape": 0,
    "no-underscore-dangle": 0
  },
  "extends": "airbnb-base",
  "env": {
    "browser": true,
    "jest": true
  }
}

What did you just install? What does this .eslintrc file contain?

  • the package eslint will let you lint your files
  • the package eslint-plugin-import is necessary when you lint ES2015+ (ES6+) based source code
  • the package babel-eslint will be used as a parser by eslint, because eslint itself might not support all babel features
  • the package eslint-config-airbnb-base is an extensible set of rules shared by airbnb (you could use another preset) – those rules are overridable in the rules section.

Add the following in the scripts section of your package.json:

"lint": "./node_modules/.bin/eslint src",
"lint-fix": "./node_modules/.bin/eslint --fix src --ext .js",
"test": "npm run lint && npm run jest"

Now, running npm test will both lint your source code and run your unit tests.

Pre-commit hook

To make sure you don’t commit broken code, setup a pre-commit hook that will lint your source code and run your tests before each of your commits.

Run

yarn add pre-commit --dev

Add the following section to your package.json:

"pre-commit": [
  "test"
],

Setup build steps

We will build and distribute our package in 3 different formats; that way, the end user will be able to use the one that best fits his usage / build tools.

Tools like webpack are able to use any of the 3 formats below. Though, your end user might want to use ES2015-aware tools (like rollup) that can take advantage of their features (such as importing only what you use in your bundle, or even tree shaking).

And if your end user doesn’t want to use tools like webpack or browserify, but uses AMD (or even uses the package defined in the global namespace), you will provide the UMD build (useful on platforms like codepen).

This is why, as a package author, it is interesting to publish in those different formats for your final users.
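To illustrate, here are three alternative ways each format might be consumed (the mount export and the liteRouter global name are assumptions):

// CommonJS - resolved through the "main" field (lib/index.js)
const { mount } = require('lite-router');

// ES2015 modules - resolved through the "module" field (es/index.js),
// lets ES2015-aware bundlers pick only what you import
import { mount } from 'lite-router';

// UMD - loaded in the browser via a <script> tag (from unpkg, for example):
// <script src="https://unpkg.com/lite-router/dist/lite-router.min.js"></script>
// then available on a global: window.liteRouter.mount(...)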

To install rollup, its plugins and rimraf, run

yarn add rimraf rollup rollup-plugin-babel rollup-plugin-commonjs rollup-plugin-node-resolve rollup-plugin-replace rollup-plugin-uglify rollup-watch --dev

Add the following in the scripts section of your package.json:

"clean": "rimraf lib dist es",
"build": "npm run build:commonjs && npm run build:umd && npm run build:umd:min && npm run build:es",
"build:watch": "echo 'build && watch the COMMONJS version of the package - for other version, run specific tasks' && npm run build:commonjs:watch",
"build:commonjs": "cross-env BABEL_ENV=commonjs babel src --out-dir lib",
"build:commonjs:watch": "npm run build:commonjs -- --watch",
"build:es": "cross-env BABEL_ENV=es babel src --out-dir es",
"build:es:watch": "npm run build:es -- --watch",
"build:umd": "cross-env BABEL_ENV=es NODE_ENV=development node_modules/.bin/rollup src/index.js --config --sourcemap --output dist/lite-router.js",
"build:umd:watch": "npm run build:umd -- --watch",
"build:umd:min": "cross-env BABEL_ENV=es NODE_ENV=production rollup src/index.js --config --output dist/lite-router.min.js",

Each build:* task will build your package in a specific folder.
Each build:*:watch task will build your package in that specific folder in watch mode.

You can npm run the following tasks:

  • build:commonjs: will build the CommonJS version in the lib folder
  • build:es: will build the ES2015+ modules version in the es folder
  • build:umd: will build the UMD version at dist/lite-router.js (with sourcemaps)
  • build:umd:min: will build the minified UMD version at dist/lite-router.min.js

npm run clean will clean up the directories created by those build tasks.
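The --config flag used in the build:umd* tasks makes rollup load a rollup.config.js at the root of the project. Here is a minimal sketch for the rollup / plugin versions installed above (the liteRouter global name is an assumption):

// rollup.config.js
import babel from 'rollup-plugin-babel';
import nodeResolve from 'rollup-plugin-node-resolve';
import commonjs from 'rollup-plugin-commonjs';
import replace from 'rollup-plugin-replace';
import uglify from 'rollup-plugin-uglify';

const config = {
  format: 'umd',            // build usable via <script>, AMD or CommonJS
  moduleName: 'liteRouter', // global name exposed by the UMD build (assumption)
  plugins: [
    nodeResolve(),
    commonjs(),
    babel({ exclude: 'node_modules/**' }),
    replace({ 'process.env.NODE_ENV': JSON.stringify(process.env.NODE_ENV) })
  ]
};

// only minify the production build (build:umd:min)
if (process.env.NODE_ENV === 'production') {
  config.plugins.push(uglify());
}

export default config;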

Setup example

I will go deeper into how to manage github pages (a git orphan branch as a deployment channel) in another post – in the meantime, you can check out the README of the lite-router project (actually, most of my projects hold the gh-pages branch in a build/dist directory).

In your development workflow, you might want to both work on your package and use it (without re-publishing a new version to npm at each change) in another project. For that, you can use npm link.

Say you work on my-package (this is the name attribute in your package.json) and you want to be able to test it directly in my-project:

cd my-package
npm link
cd ../my-project
npm link my-package

From there, you will be able to import { myFeature } from 'my-package' in your project (as if you had npm-installed your my-package).

Just run the correct build:*:watch task in my-package so that the build stays up to date with the changes you apply to its source code.

Upgrade the package.json file

All the following explanations were applied to this package.json file. You can also check out the npm doc about the package.json.

Information-related fields

In your package.json, make sure you have:

  • a name, a version, a description and an author section
  • a license, a homepage and a keywords section
  • a repository and bugs section

Specify endpoints

So that your build files will be part of your final published package, you will have to declare them. Add the following to your package.json:

"main": "lib/index.js",
"module": "es/index.js",
"jsnext:main": "es/index.js",
"files": [
  "dist",
  "lib",
  "es",
  "src"
],
  • The files section tells npm to package those folders when publishing (otherwise, they would be ignored, since they are listed in the .gitignore file)
  • main defines the endpoint of the CommonJS build
  • jsnext:main and module define the endpoint of the ES2015 build (we define the endpoint twice because jsnext:main was the first to be in use, but it’s more likely that module will be standardized)

Add prepare script

Add the following in the scripts section of your package.json:

"prepare": "npm run clean && npm test && npm run build",

This will make sure the build files (which aren’t part of the git repo) are generated when your contributors (NOT users) run npm install after forking and cloning your repo.

More on prepare script.

Travis CI

Don’t bother with this section if you don’t use Travis CI. If you use another CI tool, well, it’s pretty much the same.

The following .travis.yml file will test your builds, run the linter and the unit tests:

language: node_js
node_js:
  - "6"
script:
  - npm test

Note: Since I added npm test to the prepare script, the tests will run twice (this is also why there is no mention of npm run build, since it’s tested after the install). There are ways to avoid this, but I won’t talk about them here.

Check out an example of that kind of Travis test.

Publish your package

Don’t forget to add a README.md file with:

  • Why you made the package
  • How to install it
  • A short example
  • Describe the API
  • In a subsection, explain how to contribute (your git workflow, how to install, run, test, build …)
  • Add some license

If you have your npm account set up on your computer, you are ready to publish your package. Just run:

npm run build
npm publish

Once published, you can

  • npm install it somewhere else
  • access the different builds through unpkg.com (a useful platform made by Michael Jackson, one of the authors of React Router) – example.

Resources / Credits

Source code available at topheman/lite-router.