It seems that this approach currently only collects Kubernetes "events", such as:

`message: Back-off restarting failed container`

without the actual pod log output that explains the error. Any thoughts on how this could be achieved?
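For context, the log output that explains such a back-off can be pulled manually from the crashed container instance; the pod, container, and namespace names below are placeholders:

```sh
# Fetch logs from the previous (crashed) instance of the container,
# which usually contain the error behind "Back-off restarting failed container"
kubectl logs <pod-name> --previous --container <container-name> -n <namespace>
```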
```yaml
---
# Source: sentry-kubernetes/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: sentry-kubernetes
    heritage: Helm
    release: sentry-kubernetes
    chart: sentry-kubernetes-0.3.2
  name: sentry-kubernetes
---
# Source: sentry-kubernetes/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  labels:
    app: sentry-kubernetes
    heritage: Helm
    release: sentry-kubernetes
    chart: sentry-kubernetes-0.3.2
  name: sentry-kubernetes
type: Opaque
data:
  sentry.dsn: "..."
---
# Source: sentry-kubernetes/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: sentry-kubernetes
    heritage: Helm
    release: sentry-kubernetes
    chart: sentry-kubernetes-0.3.2
  name: sentry-kubernetes
rules:
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - get
      - list
      - watch
---
# Source: sentry-kubernetes/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: sentry-kubernetes
    heritage: Helm
    release: sentry-kubernetes
    chart: sentry-kubernetes-0.3.2
  name: sentry-kubernetes
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: sentry-kubernetes
subjects:
  - kind: ServiceAccount
    name: sentry-kubernetes
    namespace: default
---
# Source: sentry-kubernetes/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sentry-kubernetes
    heritage: Helm
    release: sentry-kubernetes
    chart: sentry-kubernetes-0.3.2
  name: sentry-kubernetes
spec:
  replicas:
  selector:
    matchLabels:
      app: sentry-kubernetes
  template:
    metadata:
      annotations:
        checksum/secrets: ...
      labels:
        app: sentry-kubernetes
        release: sentry-kubernetes
    spec:
      containers:
        - name: sentry-kubernetes
          image: "getsentry/sentry-kubernetes:latest"
          imagePullPolicy: Always
          env:
            - name: DSN
              valueFrom:
                secretKeyRef:
                  name: sentry-kubernetes
                  key: sentry.dsn
          resources: {}
      serviceAccountName: sentry-kubernetes
```
Generated with:

```sh
helm template sentry-kubernetes sentry/sentry-kubernetes --set sentry.dsn=https://... > exported-sentry-kubernetes.yaml
```
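Note that the ClusterRole rendered above only grants access to `events`. If the agent were extended to also read pod logs, I'd assume it would need an additional rule along these lines (a sketch, not something the chart currently renders):

```yaml
# Hypothetical extra rule: reading logs requires the pods/log subresource
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - pods/log
    verbs:
      - get
      - list
```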