kubernetes not respecting apparmor profile?


Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:
I'm trying to deploy UniFi Video from Ubiquiti in a pod, using this Docker image: https://github.com/pducharme/UniFi-Video-Controller. According to its documentation, I need to run the container with "--security-opt apparmor:unconfined" to get around issues mounting tmpfs, but when I try the equivalent in Kubernetes, it still fails.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: unifi-video
  annotations:
    container.apparmor.security.beta.kubernetes.io/unifi-video: unconfined
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: unifi-video
    spec:
      hostname: unifi-video
      nodeSelector:
        kubernetes.io/hostname: mira-b.home
      volumes:
      - name: dockerdata
        persistentVolumeClaim:
          claimName: dockerdata-nas
      - name: cameradata
        persistentVolumeClaim:
          claimName: cameras-nas
      containers:
      - name: unifi-video
        image: pducharme/unifi-video-controller:3.9.7
        securityContext:
          capabilities:
            add:
              - SYS_ADMIN
              - DAC_READ_SEARCH
        volumeMounts:
        - name: dockerdata
          subPath: unifi-video
          mountPath: /var/lib/unifi-video
        - name: cameradata
          mountPath: /nfs/cameras
        env:
        - name: PUID
          value: '1001'
        - name: PGID
          value: '1001'
        - name: TZ
          value: 'America/Los_Angeles'
        - name: DEBUG
          value: '1'
        ports:
        - name: ems-liveflv
          containerPort: 6666
        - name: ems-rtmp
          containerPort: 1935
        - name: uvcmicro-talk
          containerPort: 7004
          protocol: UDP
        - name: app-http
          containerPort: 7080
        - name: camera-mgmt
          containerPort: 7442
        - name: app-https
          containerPort: 7443
        - name: nvr-client
          containerPort: 7444
        - name: ems-livews
          containerPort: 7445
        - name: ems-livewss
          containerPort: 7446
        - name: ems-rtsp
          containerPort: 7447
        - name: video-discovery
          containerPort: 10001
          protocol: UDP
        readinessProbe:
          tcpSocket:
            port: app-https
          initialDelaySeconds: 20
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: app-https
          initialDelaySeconds: 40
          periodSeconds: 20

Despite this, I get the following error:

mount: tmpfs is write-protected, mounting read-only
mount: cannot mount tmpfs read-only
failed.

According to the k8s docs, setting unconfined via the annotation should prevent this from happening.
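For reference, the annotation format as I understand it from the docs: the key suffix is the container name, and the value is a profile reference (runtime/default, localhost/<profile_name>, or unconfined):

container.apparmor.security.beta.kubernetes.io/<container_name>: <profile_ref>
# in this case the container is named unifi-video, so:
container.apparmor.security.beta.kubernetes.io/unifi-video: unconfined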

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:08:34Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
  • OS (e.g. from /etc/os-release): Ubuntu 18.04 LTS
  • Kernel (e.g. uname -a): Linux mira-a.home 4.15.0-24-generic #26-Ubuntu SMP Wed Jun 13 08:44:47 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
evanrich · 15 Jul 2018


All comments


@evanrich: There are no sig labels on this issue. Please add a sig label.

A sig label can be added by either:

  1. mentioning a sig: @kubernetes/sig-<group-name>-<group-suffix>
    e.g., @kubernetes/sig-contributor-experience-<group-suffix> to notify the contributor experience sig, OR

  2. specifying the label manually: /sig <group-name>
    e.g., /sig scalability to apply the sig/scalability label

Note: Method 1 will trigger an email to the group. See the group list.
The <group-suffix> in method 1 has to be replaced with one of these: bugs, feature-requests, pr-reviews, test-failures, proposals

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · 15 Jul 2018

The annotation should go on the template, since you want it to end up on the pod object:

spec:
  template: 
    metadata:
      annotations: …
liggitt · 15 Jul 2018
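For reference, a minimal sketch of the Deployment from the original report with the annotation moved onto the pod template (names and image taken from that manifest, other fields trimmed for brevity):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: unifi-video
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: unifi-video
      annotations:
        # keyed by container name; placed here it ends up on the pod object,
        # so the kubelet actually applies the unconfined profile
        container.apparmor.security.beta.kubernetes.io/unifi-video: unconfined
    spec:
      containers:
      - name: unifi-video
        image: pducharme/unifi-video-controller:3.9.7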

@liggitt aww man thank you so much. Been banging my head on this for hours.

evanrich · 15 Jul 2018

@evanrich
@liggitt
https://github.com/kubernetes/kubernetes/issues/79265

Could you help me? This problem has been troubling me for almost a week.
Thanks!

ChenLong2014 · 22 Jun 2019
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: unifi-video # Deployment name
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        container.apparmor.security.beta.kubernetes.io/unifi-video: unconfined
        # unifi-video here is the container name, not the pod name
      labels:
        app: unifi-video # pod label
lzyrapx · 8 Oct 2019
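Continuing that excerpt, the container defined under the same template has to use the name from the annotation key suffix (a sketch; image taken from the original post):

    spec:
      containers:
      - name: unifi-video # matches the suffix of the apparmor annotation key
        image: pducharme/unifi-video-controller:3.9.7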