How would you like to be able to run one command and have your entire development stack up and running?
And this starts my database service, runs the server, compiles the client modules, and then watches files for changes: restarting the server if needed, reloading the page, or just hot-updating the styles.
And the best part is, you can just check out the code and do this on any system.
I'm doing this with:
Docker runs an app in its own space; think of it like running a VM, but at native speed.
There are lots of examples, projects, and videos. I recommend watching some videos on YouTube; just make sure they're recent, as the ecosystem has grown very rapidly in the last year or so.
```dockerfile
FROM ubuntu:14.04
RUN apt-get update && apt-get install --yes curl
RUN curl --silent --location https://deb.nodesource.com/setup_4.x | bash -
RUN apt-get install --yes nodejs
COPY . /nodeapp
WORKDIR /nodeapp
RUN npm install
CMD ["npm", "start"]
```
Starting from a base image, install nodejs, copy the source files into the image, install dependencies, and run it. In development, I use a variation on this where I mount the source code directly into the container (via volumes) instead of copying it, so you can change the code and reload.
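That development variation can be sketched as a second Dockerfile (the file name and paths here are my assumptions, not the exact setup above): drop the `COPY` and `RUN npm install` steps and mount the source, with its node_modules, into /nodeapp at run time.

```dockerfile
# Dockerfile.dev (assumed name): same base setup, but no COPY / npm install.
# The source tree is mounted into /nodeapp at run time, e.g. with
# `docker run -v $(pwd):/nodeapp ...` or a compose `volumes:` entry.
FROM ubuntu:14.04
RUN apt-get update && apt-get install --yes curl
RUN curl --silent --location https://deb.nodesource.com/setup_4.x | bash -
RUN apt-get install --yes nodejs
WORKDIR /nodeapp
CMD ["npm", "start"]
```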
Webpack is a module bundler: it takes your many JS files and bundles them together, like require.js, but it's much easier and faster, and comes with many other goodies, such as hot module replacement (save your code and it updates in your browser immediately).
I've previously used a require.js-with-grunt workflow on a rather massive project; the compile time was getting very slow, and the configuration of the module dependencies was brittle.
I'm still quite new to webpack, but I'm loving it so far. With its loaders, such as babel-loader, adopting ES6 is a breeze.
Docker Compose manages a collection of Docker containers and links them together.
Here's an example of tying together several containers:
```yaml
redis:
  build: db/redis/
mongo:
  build: db/mongo/
website:
  build: app/website/
  links:
    - redis
    - mongo
```
Here, three containers are linked together, and running `docker-compose up` builds all the images and starts the containers.
For development, I extend this with some more goodies:
```yaml
mongo:
  # expose the port to the host machine so I can easily connect to it with dev tools
  ports:
    - "27017:27017"
website:
  # mount some folders from the host for development:
  # - source code
  # - my own node_modules (I use npm link with my own modules)
  volumes:
    - ./app/website:/nodeapp
    - ../modules/:/usr/local/lib/node_modules/
  # expose the app and webpack-dev-server to host
  ports:
    - "8080:8080"
    - "8090:8090"
  # nodemon for server restart on code changes & webpack-dev-server to serve client modules
  command: >
    nodemon -w server/ server/index.js & npm run dev
  # trim logging
  log_driver: "json-file"
  log_opt:
    max-size: "100k"
    max-file: "1"
```
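The `command` above relies on an `npm run dev` script; one way to wire that up is a package.json entry that launches webpack-dev-server (the exact flags and port here are assumptions):

```json
{
  "scripts": {
    "start": "node server/index.js",
    "dev": "webpack-dev-server --hot --inline --host 0.0.0.0 --port 8090"
  }
}
```

Binding to 0.0.0.0 matters inside a container: otherwise the dev server is only reachable from within the container itself, and the `8090:8090` port mapping has nothing to forward to.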
Then you can make a quick bash script:
```bash
#!/bin/bash
if [ "$ENV" = "PROD" ]; then
  echo -e "Environment: \e[1;32mPRODUCTION\e[m"
  FILES="-f infra.base.yaml"
else
  echo -e "Environment: \e[1;31mDEV\e[m"
  FILES="-f infra.base.yaml -f infra.dev.yaml"
fi
# $FILES is intentionally unquoted so the -f flags split into separate arguments
docker-compose $FILES "$@"
```
and use it for development (`./infra up`), and then deploy it:
```shell
export ENV=PROD
./infra up
```