# Kubernetes deployment
Download the config files from: [Kubernetes](https://dtqst.sharepoint.com/:f:/r/sites/FAL-CLARINDSpace/Shared%20Documents/General/Kubernetes?csf=1&web=1&e=QmNGmE)
Application Pod (`dspace-deploy` Deployment) with 3 containers in ONE pod:
- Angular UI (SSR) on container port 4000
- REST API (Tomcat/Spring Boot) on container port 8080
- Solr (standalone) on container port 8983
- Postgres: PVC (5Gi) created automatically by the StatefulSet
- Solr: NOT persistent (`emptyDir`) – indexes are rebuilt on each restart
- Single `dspace-service` (NodePort) mapping all three ports (see the inspection sketch after this list):
  - UI: NodePort 30400 → 4000
  - REST: NodePort 30808 → 8080 (API base path `/server`)
  - Solr: NodePort 31883 → 8983 (Solr admin UI base `/solr`)
- `dspace-configmap.yaml` supplies `local.cfg`, the Angular `environment.prod.ts`, and the SSR `config.yml`
- `solr.server` points to `http://localhost:8983/solr` because Solr runs in the same pod
- CORS origins include the host reverse proxy port (81) and localhost dev values
Deploy:

```
./deploy.sh
kubectl get pods -n dspace -w
```

Wait until both pods show Running (Postgres + dspace-deploy). Give it an extra 2–3 minutes after all containers show Ready before testing, to let migrations and indexing complete.
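Two standard kubectl commands help to follow the startup; the container name `dspace` is taken from the exec example further below and may differ in your manifests:

```
# Block until the Deployment reports all replicas ready
kubectl -n dspace rollout status deployment/dspace-deploy

# Tail the application container to watch DB migrations and indexing
kubectl -n dspace logs deployment/dspace-deploy -c dspace -f --tail=100
```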
To verify Postgres persistence after a pod restart (note that the StatefulSet pod name is `postgres-0`):

```
kubectl -n dspace delete pod postgres-0
kubectl -n dspace get pods -w
kubectl -n dspace get pvc | grep postgres-data
```
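Optionally, confirm that the data itself (not just the PVC) survived the restart. This is a sketch only: the container name `postgres` and the database user `dspace` are assumptions and may differ in your StatefulSet:

```
# List databases inside the restarted Postgres pod (container and user names are assumptions)
kubectl -n dspace exec -it postgres-0 -c postgres -- psql -U dspace -l
```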
If you need to reset everything:

```
minikube delete
./deploy.sh
```

To create an administrator, open a shell in the application container:

```
kubectl exec -n dspace -it <POD-NAME> -c dspace -- /bin/bash
cd /dspace/bin
./dspace create-administrator
```

To restart the application deployment:

```
kubectl rollout restart deployment/dspace-deploy -n dspace
```

Find the node IP:

```
minikube ip
```

Then:
- UI: `http://<node-ip>:30400/`
- REST: `http://<node-ip>:30808/server/`
- Solr: `http://<node-ip>:31883/solr/`
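A quick smoke test from the host, assuming `<node-ip>` is the value returned by `minikube ip`; `/server/api` is the standard DSpace 7 REST entry point:

```
NODE_IP=$(minikube ip)

# UI (Angular SSR) – expect an HTTP 200 with HTML
curl -sI "http://${NODE_IP}:30400/" | head -n 1

# REST API root – expect a JSON (HAL) document
curl -s "http://${NODE_IP}:30808/server/api" | head -c 200; echo

# Solr admin UI
curl -sI "http://${NODE_IP}:31883/solr/" | head -n 1
```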
Example nginx reverse proxy snippet (adjust the upstream IPs to your `<node-ip>`):
```nginx
upstream dspace_frontend { server 192.168.49.2:30400; }
upstream dspace_backend  { server 192.168.49.2:30808; }
upstream dspace_solr     { server 192.168.49.2:31883; }

server {
    listen 81;
    server_name dev-10.pc;

    root /var/www/html;
    client_max_body_size 3000M;

    proxy_connect_timeout 600;
    proxy_send_timeout    600;
    proxy_read_timeout    600;
    send_timeout          600;

    # Common headers (safe only if no 'include proxy_params' anywhere below)
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Host $http_host;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Forwarded "host=$http_host;proto=$scheme;port=$server_port";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    location /server/ { proxy_pass http://dspace_backend; }
    location /        { proxy_pass http://dspace_frontend; }
    location /solr/   { proxy_pass http://dspace_solr/solr/; }
}
```

Through the proxy:
- http://dev-10.pc:81/
- http://dev-10.pc:81/server
- http://dev-10.pc:81/solr
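To verify the wiring end-to-end through nginx (assuming `dev-10.pc` resolves to the host running the proxy):

```
# Frontend through the proxy
curl -sI http://dev-10.pc:81/ | head -n 1

# REST API through the /server/ location
curl -s http://dev-10.pc:81/server/api | head -c 200; echo
```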
Create a file `kubeconfig.yaml` and replace the token with the actual value:
```yaml
apiVersion: v1
kind: Config
clusters:
- name: "kuba-cluster"
  cluster:
    server: "https://rancher.cloud.e-infra.cz/k8s/clusters/c-m-qvndqhf6"
users:
- name: "kuba-cluster"
  user:
    token: ""
contexts:
- name: "kuba-cluster"
  context:
    user: "kuba-cluster"
    cluster: "kuba-cluster"
current-context: "kuba-cluster"
```

To deploy on Rancher, use the following commands; note that the namespace in `kustomization.yaml` and in the `kubectl apply` command must match:
```
set KUBECONFIG=path/to/kubeconfig.yaml
kubectl apply -k ./ -n misutka-ns
kubectl get all -n misutka-ns
kubectl get pods -n misutka-ns
```
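Before applying, it can be worth confirming that the kubeconfig works and that the rendered manifests target the intended namespace; `--dry-run=client` only validates locally without creating anything:

```
# Verify cluster access with the Rancher kubeconfig
kubectl --kubeconfig path/to/kubeconfig.yaml get nodes

# Check which namespace the kustomization sets
kubectl kustomize ./ | grep -m1 "namespace:"

# Validate the apply without touching the cluster
kubectl apply -k ./ -n misutka-ns --dry-run=client
```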