
contiv-netmaster always Pending #1162

@amwork2010

Description


Environment:
docker 18.09
kubeadm 1.13

Steps to reproduce:

kubeadm init --kubernetes-version=v1.13.0 --pod-network-cidr=10.10.0.0/16 --apiserver-advertise-address=192.168.55.31
git clone https://github.com/contiv/netplugin
cd netplugin/install/k8s/contiv
./contiv-compose use-release --k8s-api https://192.168.55.31:6443 -v $(cat ../../../version/CURRENT_VERSION) ./contiv-base.yaml > ./contiv.yaml
kubectl apply -f contiv.yaml
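To double-check that the generated manifest actually carries the master toleration and node selector shown in the describe output below, the relevant blocks can be grepped out of it (a quick sketch; the field names assume the standard layout that contiv-compose emits):

```shell
# Print the nodeSelector and tolerations sections of the generated
# manifest, with 4 lines of trailing context for each match.
grep -n -A4 -e 'nodeSelector:' -e 'tolerations:' ./contiv.yaml
```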

kubectl get po --all-namespaces

NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   contiv-netmaster-vjsvd                0/1     Pending   0          9m32s
kube-system   coredns-86c58d9df4-hzt6d              0/1     Pending   0          55m
kube-system   coredns-86c58d9df4-zwn9d              0/1     Pending   0          55m
kube-system   etcd-kubecontiv1                      1/1     Running   0          55m
kube-system   kube-apiserver-kubecontiv1            1/1     Running   0          55m
kube-system   kube-controller-manager-kubecontiv1   1/1     Running   0          55m
kube-system   kube-proxy-f79dv                      1/1     Running   0          55m
kube-system   kube-scheduler-kubecontiv1            1/1     Running   0          55m

kubectl describe pod contiv-netmaster-vjsvd -n kube-system

Name: contiv-netmaster-vjsvd
Namespace: kube-system
Priority: 0
PriorityClassName:
Node:
Labels: k8s-app=contiv-netmaster
Annotations: scheduler.alpha.kubernetes.io/critical-pod:
Status: Pending
IP:
Controlled By: ReplicaSet/contiv-netmaster
Init Containers:
contiv-netplugin-init:
Image: contiv/netplugin-init:latest
Port:
Host Port:
Environment:
CONTIV_ROLE: netmaster
CONTIV_MODE: <set to the key 'contiv_mode' of config map 'contiv-config'> Optional: false
CONTIV_K8S_CONFIG: <set to the key 'contiv_k8s_config' of config map 'contiv-config'> Optional: false
CONTIV_CNI_CONFIG: <set to the key 'contiv_cni_config' of config map 'contiv-config'> Optional: false
Mounts:
/var/contiv from var-contiv (rw)
/var/run/secrets/kubernetes.io/serviceaccount from contiv-netmaster-token-ts9pl (ro)
contiv-netctl:
Image: contiv/netplugin:latest
Port:
Host Port:
Command:
cp
/contiv/bin/netctl
/usr/local/sbin/netctl
Environment:
Mounts:
/usr/local/sbin/ from usr-local-sbin (rw)
/var/run/secrets/kubernetes.io/serviceaccount from contiv-netmaster-token-ts9pl (ro)
Containers:
contiv-netmaster:
Image: contiv/netplugin:latest
Port:
Host Port:
Environment:
CONTIV_ROLE: netmaster
CONTIV_NETMASTER_MODE: <set to the key 'contiv_mode' of config map 'contiv-config'> Optional: false
CONTIV_NETMASTER_ETCD_ENDPOINTS: <set to the key 'contiv_etcd' of config map 'contiv-config'> Optional: false
CONTIV_NETMASTER_FORWARD_MODE: <set to the key 'contiv_fwdmode' of config map 'contiv-config'> Optional: false
CONTIV_NETMASTER_NET_MODE: <set to the key 'contiv_netmode' of config map 'contiv-config'> Optional: false
Mounts:
/var/contiv from var-contiv (rw)
/var/run/secrets/kubernetes.io/serviceaccount from contiv-netmaster-token-ts9pl (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
var-contiv:
Type: HostPath (bare host directory volume)
Path: /var/contiv
HostPathType:
usr-local-sbin:
Type: HostPath (bare host directory volume)
Path: /usr/local/sbin/
HostPathType:
contiv-netmaster-token-ts9pl:
Type: Secret (a volume populated by a Secret)
SecretName: contiv-netmaster-token-ts9pl
Optional: false
QoS Class: BestEffort
Node-Selectors: node-role.kubernetes.io/master=
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type     Reason            Age                  From               Message
----     ------            ----                 ----               -------
Warning  FailedScheduling  2m6s (x64 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
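The event says the single node carries a taint the pod does not tolerate. The pod tolerates node-role.kubernetes.io/master:NoSchedule (see Tolerations above), so I would expect another taint to be blocking it. The node's actual taints can be inspected with standard kubectl (a diagnostic sketch; the taint key in the removal example is an assumption about what might be found, not confirmed output):

```shell
# List every node together with its taints; compare the result
# against the pod's Tolerations section in the describe output.
kubectl get nodes \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'

# If a blocking taint turns up (for example a NoSchedule taint such
# as node.kubernetes.io/not-ready, which a node can carry before any
# CNI is running), it can be removed with a trailing "-":
# kubectl taint nodes <node-name> node.kubernetes.io/not-ready:NoSchedule-
```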

How can I solve this problem?
Thanks!
