Technical FAQ

As with the non-technical FAQ page, here are answers to some questions you may find helpful. For more details, you can download a copy of my CV. If you have a question I haven't answered, please feel free to contact me.

What programming languages are you proficient in?

I mainly work with Python and C#, and to a lesser degree with C and Java. I also have experience with YAML, Groovy, and Bash, which I often use for scripting and automation tasks.

I also use Jinja2 for templating, which lets me create dynamic, reusable templates. This is particularly useful for generating configuration files and other text-based output, keeping my projects flexible and maintainable.
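
As a small illustration of what I mean, here is a sketch of rendering the same template with different values; the template and variable names are placeholders, not from a real project:

```python
from jinja2 import Template

# A minimal config template; the host and port are placeholder values.
template = Template(
    "server {\n"
    "    listen {{ port }};\n"
    "    server_name {{ host }};\n"
    "}\n"
)

# Rendering the same template with different values yields
# per-environment configuration files.
print(template.render(host="staging.example.com", port=8080))
print(template.render(host="example.com", port=80))
```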


How proficient are you with Linux?

I have been using Linux as my daily driver for the last 15 years. I grew up alongside the Ubuntu and Fedora distributions, and during my professional career, I have worked with Ubuntu Server and RHEL.


What frameworks and libraries do you commonly use?

For front-end development, I use React, and for back-end development, I rely on Django. For APIs, I use FastAPI. For telemetry and instrumentation, I mainly use OpenTelemetry. When working with data, I use NumPy, scikit-learn, Pandas, and TensorFlow. For messaging, I use the confluent-kafka library. My choice of libraries depends on both the language(s) and the specific requirements of the project, and I'm flexible in selecting the most appropriate tools for each one.
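
As an example of the API side, here is a minimal FastAPI sketch; the route and model are illustrative only:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

# FastAPI validates the request body against the Item model
# and serializes the response to JSON automatically.
@app.post("/items")
def create_item(item: Item) -> dict:
    return {"name": item.name, "price": item.price}

# Run with: uvicorn main:app --reload  (assuming this file is main.py)
```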


What version control systems do you use?

I primarily use Git for version control, with experience in platforms like GitHub and GitLab. I follow branching strategies like Git Flow, depending on project requirements.


How do you approach DevOps and CI/CD?

I have implemented CI/CD pipelines using a range of technologies, including Jenkins, Drone CI, and GitHub Actions. For containerization, I have experience with both Docker and containerd, and I use SonarQube for detecting code smells and ensuring code quality.

For managing deployments, I use Helm. I have also employed both Flux and ArgoCD for GitOps, and Kubernetes for orchestration. This comprehensive approach ensures efficient, reliable, and scalable DevOps practices.


What databases are you experienced with?

I work with both SQL and NoSQL databases. For SQL, I mainly use PostgreSQL. I also use Prometheus for time-series data and Elasticsearch for search and analytics. For vector indexing and similarity search, I have experience with Qdrant.
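
As a sketch of the similarity-search workflow, here is a minimal Qdrant example using the Python client's local in-memory mode; the collection name, vectors, and payloads are placeholders, and the exact client API varies between versions:

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

client = QdrantClient(":memory:")  # in-memory instance, just for this sketch

# Create a collection of 4-dimensional vectors compared by cosine distance.
client.create_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)

# Index a couple of placeholder vectors with payloads.
client.upsert(
    collection_name="docs",
    points=[
        PointStruct(id=1, vector=[0.1, 0.2, 0.3, 0.4], payload={"title": "a"}),
        PointStruct(id=2, vector=[0.4, 0.3, 0.2, 0.1], payload={"title": "b"}),
    ],
)

# Nearest-neighbour search for a query vector.
hits = client.search(collection_name="docs", query_vector=[0.1, 0.2, 0.3, 0.35], limit=1)
print(hits[0].payload)
```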


What caching and messaging technologies do you use?

For caching, I use Redis. For message brokering, I use Apache Kafka. These technologies help ensure efficient data handling and communication within the systems I work on.
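
A typical pattern I use with Redis is cache-aside; the sketch below assumes a local Redis instance, and load_user_from_db is a hypothetical stand-in for a real database query:

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_user_from_db(user_id: int) -> dict:
    # Stand-in for a real database lookup.
    return {"id": user_id, "name": "example"}

def get_user(user_id: int) -> dict:
    """Cache-aside: try Redis first, fall back to the source of truth."""
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    user = load_user_from_db(user_id)
    r.setex(key, 300, json.dumps(user))  # cache for five minutes
    return user
```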


How do you ensure code quality and maintainability?

I enforce code quality through automated testing frameworks like pytest. I use static analysis tools like SonarQube and linters (ESLint, Pylint) to maintain code standards, and code reviews and pair programming are integral to my workflow.
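
As a small example of the testing side, here is a parametrized pytest case; the slugify function is a hypothetical stand-in for real code under test:

```python
import pytest

def slugify(text: str) -> str:
    # Hypothetical function under test: lowercase and join words with hyphens.
    return "-".join(text.lower().split())

@pytest.mark.parametrize(
    ("text", "expected"),
    [
        ("Hello World", "hello-world"),
        ("  spaced   out  ", "spaced-out"),
        ("already-slugged", "already-slugged"),
    ],
)
def test_slugify(text: str, expected: str) -> None:
    assert slugify(text) == expected
```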


What cloud platforms are you familiar with?

I have experience with AWS and Google Cloud, having deployed and maintained a number of projects on both. I use services like AWS EC2, ECS, S3, Lambda, and IAM Identity Center, as well as Google Kubernetes Engine (GKE) on GCP, and I heavily interact with both platforms programmatically.
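
By "programmatically" I mean, for example, working through the SDKs; here is a minimal boto3 sketch with a placeholder bucket and prefix:

```python
import boto3

# Credentials come from the environment or an assumed role, never from code.
s3 = boto3.client("s3")

# List objects under a prefix; bucket and prefix here are placeholders.
response = s3.list_objects_v2(Bucket="example-bucket", Prefix="reports/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```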


How do you handle security in your projects?

I strive to integrate security early in the development process. Beyond the basics (such as keeping .env files out of the repository), I use the OWASP Top 10 as a reference when reviewing and testing any API work. I follow the principle of least privilege, rely on group-based permissions, and use IAM (Identity and Access Management) policies to secure access.
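
On the .env point, a minimal sketch of what I mean: configuration is read from the environment (injected by the runtime or a secret manager) rather than committed to the repository. The variable names here are placeholders:

```python
import os

# Required secrets: fail fast at startup if they are missing.
DATABASE_URL = os.environ["DATABASE_URL"]

# Optional settings can fall back to safe defaults.
DEBUG = os.environ.get("APP_DEBUG", "false").lower() == "true"
```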


What is your experience with microservices architecture?

I design and deploy microservices using Docker and Kubernetes. I use API gateways and Istio for service discovery, traffic management, and security. My architecture often includes RESTful APIs and gRPC for communication between services.
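
For REST calls between services, I make timeouts and retries explicit; the sketch below uses the httpx library for illustration, with a placeholder service URL:

```python
import httpx

# Connection-level retries; request timeouts keep a slow dependency from
# stalling the calling service.
transport = httpx.HTTPTransport(retries=3)

with httpx.Client(
    base_url="http://orders-service:8000",  # placeholder service address
    timeout=httpx.Timeout(2.0),
    transport=transport,
) as client:
    response = client.get("/orders/42")
    response.raise_for_status()
    order = response.json()
```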


How do you manage infrastructure as code (IaC)?

I use Terraform and Ansible for infrastructure provisioning and configuration management. I adhere to infrastructure-as-code (IaC) and configuration-as-code (CaC) best practices by keeping infrastructure templates under version control and using modules for reusability and consistency.


What deployment strategies have you used?

I have predominantly used rolling upgrades for deploying new versions of applications, but I am also familiar with blue-green and canary deployments.


What tools do you use for monitoring and incident response?

Just like security, I strive to integrate telemetry early in the development phase. For code instrumentation, I use OpenTelemetry to keep telemetry vendor-agnostic. For monitoring, I use Prometheus for metrics, Loki for logs, and Tempo for traces, with Grafana for visualization, plus CloudWatch for log aggregation and analysis. I implement alerting using Prometheus Alertmanager, and I have also used Splunk Cloud in the past.
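
As an instrumentation sketch, here is the basic OpenTelemetry setup in Python; it uses a console exporter for brevity where a real deployment would export to a collector, and the span and attribute names are placeholders:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Console exporter for the sketch; in practice an OTLP exporter would ship
# spans to a vendor-agnostic collector.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("handle-request") as span:
    span.set_attribute("request.id", "42")  # placeholder attribute
    # ... application work happens inside the span ...
```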


What is your approach to disaster recovery and high availability?

My approach to disaster recovery and high availability is grounded in a deep understanding of fault tolerance, redundancy, and system resilience. I have extensive experience deploying high availability (HA) architectures using a range of tools, including HAProxy and NGINX, often combined with Keepalived (an implementation of VRRP, the Virtual Router Redundancy Protocol) for seamless failover. Additionally, I have leveraged cloud load balancers from platforms like AWS and GCP to distribute traffic across multiple instances, ensuring scalability and availability in cloud environments.

In virtualized environments, I have implemented both VMware ESXi and Proxmox VE clusters to ensure that critical virtual machines remain operational, even during node failures. In Kubernetes, I have designed multi-master clusters and decoupled etcd from API servers to enhance fault tolerance. I’ve also deployed multi-region Kubernetes clusters, providing resilience against regional outages and ensuring service continuity across different geographic locations. Auto Scaling Groups (ASGs) have been integral in automatically adjusting resources based on demand, further bolstering high availability.

For data replication and consistency, I have employed strategies with PostgreSQL using Patroni for automatic failover, ensuring database availability and durability. For disaster recovery, I have utilized Velero with MinIO to back up and restore entire Kubernetes clusters and built Ansible playbooks for automated service switchover and failover of core services. These strategies, combined with my focus on proactive monitoring and alerting, enable me to build robust systems that are prepared for both anticipated and unforeseen disruptions.


What experience do you have with real-time data processing and stream processing?

I have experience with real-time data processing using tools like Apache Kafka for message streaming and Apache Flink and Apache Spark for stream processing. I have implemented real-time analytics dashboards and alerting systems based on streaming data.
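
As a sketch of the consuming side, here is a minimal confluent-kafka consumer loop; the broker address, group id, and topic are placeholders:

```python
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "analytics",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])

try:
    while True:
        msg = consumer.poll(1.0)  # wait up to one second for a message
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        # Each message would feed a real-time metric, dashboard, or alert here.
        print(msg.topic(), msg.value().decode("utf-8"))
finally:
    consumer.close()
```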