Technical FAQ

As with the non-technical FAQ page, here are some questions and answers you may find helpful. For more details, you can download a copy of my CV. If you have a question I haven’t answered here, please feel free to contact me.

What programming languages are you proficient in?

I mainly work with Python, but I choose languages and tools based on what best fits the project’s needs. While I have past experience with C and Java, they’re no longer part of my daily toolkit. For scripting and automation, I commonly use YAML, Groovy, Jinja2, and Bash. Ultimately, I’m technology-agnostic: I use whatever language or tool delivers the best solution.


How proficient are you with Linux?

I have been using Linux as my daily driver for the last 15 years. I grew up alongside the Ubuntu and Fedora distributions, and during my professional career, I have worked with Ubuntu Server and RHEL.


What frameworks and libraries do you commonly use?

I tend to reach for tools that are practical, well-supported, and suited to the task at hand. In production environments, I’ve used FastAPI for building APIs, OpenTelemetry for instrumentation and observability, and Kafka for event streaming and messaging pipelines.
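As a concrete illustration of the FastAPI side, here is a minimal sketch; the Item model and /items endpoint are made up for the example, not taken from a real project:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    # hypothetical request model; FastAPI validates incoming JSON against it
    name: str
    price: float

@app.post("/items")
def create_item(item: Item) -> dict:
    # a real service would persist the item; here it is simply echoed back
    return {"name": item.name, "price": item.price}
```

Served with uvicorn, a few lines like these already give you request validation and generated OpenAPI docs, which is a big part of why I reach for FastAPI.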

Beyond production work, I’ve explored React for frontend interfaces and used Django in smaller internal tools and prototypes. I’ve also experimented with data science and ML frameworks like NumPy, Pandas, scikit-learn, and PyTorch in personal projects.

Ultimately, I don’t tie myself to a particular stack. I choose libraries and frameworks based on the needs of the project, not out of habit or preference.


How do you approach DevOps and CI/CD?

I listen to and empathize with developers. Working alongside them, I identify their pain points and release partial solutions that I iterate on as feedback comes in. This way, I’ve built not only the pipelines themselves but also simple self-service portals and automations that simplify developers’ workflows so they can focus on their work.

I’ve built pipelines using Jenkins, Drone CI, GitHub Actions, and Google Cloud Build, depending on the environment. For artifact storage, I’ve worked with Nexus, AWS ECR, and Google Artifact Registry. I also write pipelines that dynamically generate Dockerfiles for different services, which makes containerization much more flexible.
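To sketch that last idea, here is roughly how a pipeline step can render a Dockerfile from a Jinja2 template; the template and parameters are simplified stand-ins for what a real pipeline would load from per-service configuration:

```python
from jinja2 import Template

# simplified stand-in for a template a pipeline would keep alongside its config
DOCKERFILE_TEMPLATE = Template("""\
FROM python:{{ python_version }}-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "{{ entrypoint }}"]
""")

# in a real pipeline these values come from each service's own configuration
print(DOCKERFILE_TEMPLATE.render(python_version="3.12", entrypoint="main.py"))
```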

For deployments, I rely on Helm templates and use GitOps tools like Flux and ArgoCD with Kubernetes to keep things versioned, reproducible, and reliable. I bake in quality gates too: tools like SonarQube help catch issues early with static analysis and code smell detection.


What databases are you experienced with?

I work with both SQL and NoSQL databases, depending on the nature of the data and how it needs to be queried. I typically reach for PostgreSQL for relational data and structured storage. For time-series data, I use Prometheus, and for search-heavy workloads or analytics, Elasticsearch does the job well.
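On the relational side, that mostly means plain parameterized SQL from Python. A minimal sketch with psycopg2, where the connection details and the users table are placeholders:

```python
import psycopg2

# placeholder connection details; real credentials come from a secret store
conn = psycopg2.connect(host="localhost", dbname="app", user="app", password="change-me")
with conn, conn.cursor() as cur:
    # parameterized queries keep user input out of the SQL string itself
    cur.execute("SELECT id, name FROM users WHERE created_at > %s", ("2024-01-01",))
    for row in cur.fetchall():
        print(row)
```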

I’ve also worked with Qdrant for vector indexing and similarity search, mostly in ML-adjacent work and experimentation around semantic search.
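A minimal sketch of that workflow, assuming the qdrant-client Python package and a local Qdrant instance; the collection, vector size, and payload are purely illustrative:

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(url="http://localhost:6333")  # assumes a local instance

# tiny 4-dimensional vectors purely for illustration; real embeddings are much larger
client.recreate_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)
client.upsert(
    collection_name="docs",
    points=[PointStruct(id=1, vector=[0.1, 0.2, 0.3, 0.4], payload={"title": "hello"})],
)

# nearest-neighbour lookup by cosine similarity
hits = client.search(collection_name="docs", query_vector=[0.1, 0.2, 0.3, 0.4], limit=3)
print(hits)
```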


How do you ensure code quality and maintainability?

For code quality, I rely on automated testing with PyTest, use static analysis tools like SonarQube, and enforce style with linters like Pylint.
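To make the testing part concrete, here is a tiny pytest example; apply_discount is a made-up function for the sake of illustration:

```python
# test_pricing.py
import pytest

def apply_discount(price: float, percent: float) -> float:
    # hypothetical function under test
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 20) == 80.0

def test_rejects_out_of_range_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```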

Having said that, tools only go so far. Regular code reviews and pair programming help catch edge cases, share context, and keep the codebase understandable over time.


What cloud platforms are you familiar with?

I’ve worked with AWS and Google Cloud, using core compute, storage, serverless, identity, and Kubernetes services. My focus is on programmatic, automated, and version-controlled infrastructure.
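By programmatic I mean driving the cloud through APIs rather than consoles. A trivial boto3 sketch, assuming credentials come from the environment or an attached IAM role:

```python
import boto3

# credentials are picked up from the environment or an attached IAM role
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```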


How do you handle security in your projects?

I integrate security from the start by following best practices like keeping .env files out of repositories and managing secrets carefully in Kubernetes. I enforce least-privilege access through group permissions and IAM policies to keep systems secure.
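In application code, that boils down to secrets never living in the repository. A minimal sketch of the pattern, where the /run/secrets mount path is an illustrative convention rather than a fixed default:

```python
import os
from pathlib import Path

def read_secret(name: str) -> str:
    # prefer a Kubernetes-mounted Secret file; /run/secrets is illustrative
    secret_file = Path("/run/secrets") / name
    if secret_file.exists():
        return secret_file.read_text().strip()
    # fall back to an environment variable injected at deploy time
    value = os.environ.get(name.upper())
    if value is None:
        raise RuntimeError(f"secret {name!r} is not configured")
    return value

db_password = read_secret("db_password")
```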


What is your experience with microservices architecture?

I design and deploy microservices with Docker and Kubernetes, and I’ve used Istio for service discovery, traffic management, and security. My architectures typically combine RESTful APIs and gRPC for communication between services.
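On the REST side, service-to-service calls are ordinary HTTP against cluster-internal DNS names. A sketch using httpx, where the service name and endpoint are hypothetical:

```python
import httpx

# hypothetical cluster-internal address (a Kubernetes Service DNS name)
ORDERS_URL = "http://orders-service.default.svc.cluster.local:8000"

def fetch_order(order_id: int) -> dict:
    # a short timeout keeps one slow dependency from stalling the caller;
    # production code layers retries and circuit breaking on top
    resp = httpx.get(f"{ORDERS_URL}/orders/{order_id}", timeout=2.0)
    resp.raise_for_status()
    return resp.json()
```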


How do you manage infrastructure as code (IaC)?

I use Terraform and Ansible for infrastructure provisioning and configuration management, and I adhere to infrastructure-as-code and configuration-as-code best practices by maintaining version-controlled infrastructure templates and using modules for reusability and consistency.


What deployment strategies have you used?

I primarily use rolling upgrades to deploy new application versions smoothly. While I’m familiar with blue-green and canary deployments, my hands-on production experience has mainly been with rolling upgrades.
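In Kubernetes terms, a rolling upgrade comes down to the Deployment’s update strategy. A sketch with the official Python client, where the surge and unavailability values are examples rather than a recommendation:

```python
from kubernetes import client

# example values: allow up to 25% extra pods during a rollout while never
# dropping below the desired replica count, so traffic is served throughout
strategy = client.V1DeploymentStrategy(
    type="RollingUpdate",
    rolling_update=client.V1RollingUpdateDeployment(
        max_surge="25%",
        max_unavailable=0,
    ),
)
```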


What tools do you use for monitoring and incident response?

Integrating telemetry early helps me spot issues quickly. I instrument code with OpenTelemetry to stay vendor-neutral. For metrics, logs, and traces, I rely on a combination of Prometheus, Loki, Tempo, and Grafana. On AWS, CloudWatch handles log aggregation and analysis, while Prometheus Alertmanager takes care of alerting. In previous roles, I’ve also worked with Splunk Cloud.
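A minimal sketch of that instrumentation with the OpenTelemetry Python SDK; the console exporter stands in for the OTLP exporter a real deployment would point at a collector:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# console exporter for illustration; production ships spans to a collector via OTLP
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def handle_request(user_id: str) -> None:
    # each unit of work gets a span carrying timing and attributes
    with tracer.start_as_current_span("handle_request") as span:
        span.set_attribute("user.id", user_id)
        # ... business logic ...

handle_request("42")
```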


What is your approach to disaster recovery and high availability?

I make sure physical infrastructure is redundant at both component and network levels. On top of that, I build virtual environments designed for resilience, using clustering and failover strategies to keep services running smoothly.

Whether it’s multi-region Kubernetes clusters, automated failover for databases, or scalable cloud resources, my goal is always to minimize downtime and ensure business continuity.

Proactive monitoring and alerting help me catch issues early, so recovery is fast and seamless.


What experience do you have with real-time data processing and stream processing?

I have experience with real-time data processing using tools like Apache Kafka for message streaming and Apache Flink and Apache Spark for stream processing. I have implemented real-time analytics dashboards and alerting systems based on streaming data.
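Here is a sketch of the consuming side with the kafka-python library; the topic, broker address, and message shape are all illustrative:

```python
import json
from kafka import KafkaConsumer

# illustrative topic and broker; real deployments add TLS and consumer groups
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="latest",
)

for message in consumer:
    event = message.value
    # in a real pipeline this would update a dashboard or evaluate an alert rule
    print(event)
```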