One more time…
Let’s tie it all together now, ok?
On one end of the chain is a web page on a bog-standard Apache; on the other end is TK4 running in a container.
To describe how I built this I’ll have to start on the TK4 side of things. We now want to deploy TK4 with an Ansible playbook, so we take all the pieces of our deployment YAML file and wrap them into Ansible tasks for the k8s module. The only iffy part is getting the indentation right…
Hello, COBOL, my old friend…
I’ve come to talk with you again…
Because a vision softly creeping…
Left its seeds while I was sleeping…
And the vision that was planted in my brain
Still remains
Within the sounds
of YAML
Anyway, here’s the playbook:
---
- name: deploy turnkey4 in a kubernetes container
  hosts: all
  become: false
  tasks:
    - name: create a namespace
      k8s:
        api_key: '{{ eregion_home_k8s_token }}'
        host: '{{ eregion_home_k8s_host }}'
        verify_ssl: false
        state: present
        definition:
          apiVersion: v1
          kind: Namespace
          metadata:
            name: tk4
    - name: create a deployment
      k8s:
        api_key: '{{ eregion_home_k8s_token }}'
        host: '{{ eregion_home_k8s_host }}'
        verify_ssl: false
        state: present
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: tk4-app
            namespace: tk4
          spec:
            replicas: 1
            selector:
              matchLabels:
                app: tk4-app
            template:
              metadata:
                labels:
                  app: tk4-app
              spec:
                containers:
                  - name: tk4-app
                    image: rattydave/docker-ubuntu-hercules-mvs:latest
                    resources:
                      limits:
                        cpu: "0.25"
                        memory: "256Mi"
                    env:
                      - name: NUMCPU
                        value: "1"
                      - name: MAXCPU
                        value: "1"
                    ports:
                      - containerPort: 3270
                      - containerPort: 8038
    - name: create a service
      k8s:
        api_key: '{{ eregion_home_k8s_token }}'
        host: '{{ eregion_home_k8s_host }}'
        verify_ssl: false
        state: present
        definition:
          apiVersion: v1
          kind: Service
          metadata:
            name: tk4-svc
            namespace: tk4
            labels:
              app: tk4-app
          spec:
            ports:
              - port: 8038
                targetPort: 8038
                name: tk4-web
                protocol: TCP
              - port: 3270
                targetPort: 3270
                name: tk4-telnet
                protocol: TCP
            selector:
              app: tk4-app
    - name: create a traefik ingress route for port 8038
      k8s:
        api_key: '{{ eregion_home_k8s_token }}'
        host: '{{ eregion_home_k8s_host }}'
        verify_ssl: false
        state: present
        definition:
          apiVersion: traefik.containo.us/v1alpha1
          kind: IngressRoute
          metadata:
            name: ingressroutetls
            namespace: tk4
          spec:
            entryPoints:
              - websecure
            routes:
              - match: Host(`tk4.apps.eregion.home`)
                kind: Rule
                services:
                  - name: tk4-svc
                    port: 8038
            tls: {}
    - name: create a traefik ingress route for TCP port 3270
      k8s:
        api_key: '{{ eregion_home_k8s_token }}'
        host: '{{ eregion_home_k8s_host }}'
        verify_ssl: false
        state: present
        definition:
          apiVersion: traefik.containo.us/v1alpha1
          kind: IngressRouteTCP
          metadata:
            name: ingressroutetcp3270
            namespace: tk4
          spec:
            entryPoints:
              - x3270
            routes:
              - match: HostSNI(`*`)
                services:
                  - name: tk4-svc
                    port: 3270
Once this works, I create a job template for it in my AWX, together with a user account that can’t do anything other than run jobs. Obviously that job needs to target a host that has the python3-openshift module with all its dependencies installed, and that host needs the Ansible variables for api_key and host set as host vars in AWX.
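As a sketch of that prerequisite (the package name varies by distro — python3-openshift on Debian/Ubuntu, plain openshift on pip), the client library can be pulled in with a small Ansible task on the target host:

```yaml
# Hypothetical prep task: install the 'openshift' Python client that
# the k8s module depends on. Installing via pip is the distro-neutral
# route; swap in your package manager's module if you prefer.
- name: install the openshift python client for the k8s module
  pip:
    name: openshift
    executable: pip3
```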
Then comes a PHP script that launches that job through the REST API of AWX (don’t forget the PHP tags at the start and end of this file – I had to strip them out so WordPress wouldn’t choke on them):
if ($_SERVER["HTTP_HOST"] != "akari.my.lan") {
    return null;
}

// URL
$url = 'https://awx.apps.my.lan/api/v2/job_templates/41/launch/';

// Credentials
$client_id = "***";
$client_pass = "***";

// HTTP options
$opts = array(
    'http' => array(
        'method' => 'POST',
        'header' => array(
            'Content-type: application/json',
            'Authorization: Basic ' . base64_encode("$client_id:$client_pass")
        ),
    ),
    'ssl' => array(
        'verify_peer' => false
    )
);

// Do request
$context = stream_context_create($opts);
$json = file_get_contents($url, false, $context);
$result = json_decode($json, true);
if (json_last_error() != JSON_ERROR_NONE) {
    return null;
}
header('Location: ' . $_SERVER["HTTP_REFERER"]);
To find the launch URL for any given job template, browse to /api/v2/ on your AWX and look for it – or grab the job template ID by using your AWX “the normal way” and looking at the links in the UI.
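For completeness, the same launch call can also be made from Ansible itself. A minimal sketch using the uri module – the template ID and hostname are taken from the PHP script above, and the credential variables are placeholders:

```yaml
# Hypothetical task: POST to the job template's launch endpoint.
# AWX answers 201 Created when the job is queued, so that is the
# status code to expect instead of the default 200.
- name: launch the TK4 job template via the AWX REST API
  uri:
    url: https://awx.apps.my.lan/api/v2/job_templates/41/launch/
    method: POST
    url_username: '{{ awx_user }}'
    url_password: '{{ awx_password }}'
    force_basic_auth: true
    validate_certs: false
    status_code: 201
  delegate_to: localhost
```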
To scale down my TK4 when I’m not using it, I use a version of the same playbook that sets the number of desired replicas to 0, together with a version of my PHP script that calls that job instead.
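That scale-down variant boils down to a single task. A sketch, reusing the same connection variables as the playbook above – the k8s module merges a partial definition into the existing object, so re-applying just spec.replicas is enough:

```yaml
# Hypothetical scale-down task: patch the existing deployment so
# that its desired replica count drops to zero.
- name: scale the tk4 deployment down to zero replicas
  k8s:
    api_key: '{{ eregion_home_k8s_token }}'
    host: '{{ eregion_home_k8s_host }}'
    verify_ssl: false
    state: present
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: tk4-app
        namespace: tk4
      spec:
        replicas: 0
```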
Now all I need to find is a web-based 3270 terminal.
On a not completely unrelated note, the start page of my internal web server here looks like this:
Yep, I don’t have anything better to do.
I think one long term project is going to be something about game servers in containers – counterstrike, etc etc – and then plug them into that startpage via AWX in just the same way as my “mainframe on demand”.
I guess it sort of ends here.