AI · LinkedIn Post · October 26, 2025 · 1 min read · 162 words

Every dev's new daily ritual: Buy tokens, Hit Generate, Pray


MOJAHID UL HAQUE

DevOps Engineer

9 likes · 1 comment · 3,443 views

Every dev's new daily ritual:
- Buy tokens.
- Hit "Generate."
- Pray the AI gods are in a good mood today.

Sometimes it delivers pure poetry — clean code, perfect logic, not a bug in sight. Other times… it's chaos wearing confidence like cologne.

You tweak the prompt. You hit retry. You mumble, "It's learning…" while secretly questioning your life choices.

Let's be real — this isn't coding anymore. It's gambling with prettier syntax.

And like every casino, guess who always wins? ➡️ The cursor.

How to fix it:
- Stop treating AI like a slot machine.
- Treat it like a clueless intern.
- Give it context before commands.
- Break tasks into bite-sized goals.
- Review its output like a PR, not a prophecy.
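The last point, reviewing AI output like a PR, can be made concrete: before accepting generated code, pin down its expected behavior with a quick test. A minimal sketch; the `slugify` helper and its cases are hypothetical stand-ins for whatever the AI produced:

```python
# Suppose the AI generated this helper. Don't trust it; test it.
def slugify(title: str) -> str:
    """Lowercase a title and join the words with hyphens."""
    return "-".join(title.lower().split())

# Review like a PR: state the expected behavior, including edge cases,
# before the code gets anywhere near a merge.
assert slugify("Hello World") == "hello-world"
assert slugify("  Ship   It  ") == "ship-it"
assert slugify("") == ""
```

If an assertion fails, that is your review comment: send it back to the "intern" with the failing case as context.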

Because here's the thing — AI doesn't replace developers. It exposes them.

  • The disciplined ones get amplified.
  • The lazy ones get automated.

Your logic, your tests, your data — that's the real power.

Originally posted on LinkedIn

View original post

Related Posts

AI · LinkedIn Post · Jan 21, 2026

AI Made Us Faster. Now Who Protects the Code?

In today's AI world, almost everyone is using AI to write code. The feature runs. The API responds. The UI looks fine. So we assume the code is good.

But here's the uncomfortable truth: 95% of the time, AI doesn't write a real solution; it applies a patch. It fixes the problem for now. Under the hood:
- the logic is copied from somewhere else
- edge cases are ignored
- security is assumed, not verified
- technical debt quietly increases

Everything works… until it doesn't.

This is exactly why tools like SonarQube matter more than ever. Not because AI is bad, but because AI is too good at making broken things look correct. SonarQube forces us to slow down for one moment:
- check what we're actually shipping
- catch issues before production does
- stop temporary fixes from becoming permanent systems

AI gives speed. SonarQube brings discipline. In an AI-first world, quality doesn't happen by default. It has to be enforced.
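Enforcing that discipline usually means running a scan as a CI step. A minimal sketch of invoking the SonarScanner CLI; the project key, source paths, server URL, and token variable are placeholders, not values from the post:

```shell
# Run a SonarQube analysis (requires the sonar-scanner CLI on the PATH;
# all values below are illustrative placeholders).
sonar-scanner \
  -Dsonar.projectKey=my-service \
  -Dsonar.sources=src \
  -Dsonar.host.url=https://sonarqube.example.com \
  -Dsonar.token="$SONAR_TOKEN"
```

Wire this into the pipeline so a failed quality gate blocks the merge, rather than relying on someone remembering to check the dashboard.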

Read more →
AI · LinkedIn Post · Jan 1, 2026

Stop treating your AI models like standard microservices

Stop treating your AI models like standard microservices. They're not. And they deserve better.

I did what most of us do at first. I took a production-ready model, wrapped it in a regular web framework, deployed it, and called it "done." It worked… until real traffic showed up. That's when the problems surfaced:
- GPUs chilling at 30% utilization
- Requests piling up
- Cloud bills climbing

The issue wasn't the model. It was the inference architecture. My Python service could only feed the GPU one request at a time. The GPU was starving while the app was "busy."

Then I brought in NVIDIA Triton Inference Server. And everything clicked.

Dynamic batching changed the game. Instead of handling requests one by one (the normal, inefficient way), Triton acts like a traffic controller: it instantly groups incoming requests and fires them at the GPU as a single optimized batch. No manual tuning. No hacky concurrency logic.

But that wasn't the only win:
- True concurrency: multiple models running on the same GPU without stepping on each other's memory.
- Better hardware efficiency: doing more work with fewer GPUs instead of throwing money at the problem.
- Production-grade visibility: real metrics instead of guessing why things feel slow.

The result?
- Throughput doubled
- Latency stayed flat
- GPU utilization jumped to 90%+

What this taught me: normal inference optimizes the model; real MLOps optimizes the system. Once inference becomes infrastructure, everything else gets easier.

If you're still wrapping models like regular APIs, you're leaving performance and money on the table.
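In Triton, the dynamic batching and concurrency described above are declared in the model's `config.pbtxt`. A sketch, assuming a hypothetical ONNX model named `detector`; the batch sizes, queue delay, and instance count are illustrative, not tuned values:

```protobuf
name: "detector"
platform: "onnxruntime_onnx"
max_batch_size: 32

# Let Triton group waiting requests into a single GPU batch,
# holding each request at most 100 microseconds.
dynamic_batching {
  preferred_batch_size: [ 8, 16 ]
  max_queue_delay_microseconds: 100
}

# Run two instances of the model on the GPU for concurrency.
instance_group [
  { count: 2, kind: KIND_GPU }
]
```

The `max_queue_delay_microseconds` knob is the batching trade-off in one line: a longer delay yields fuller batches and higher throughput at the cost of a little latency.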

Read more →
AI · LinkedIn Post · Sep 21, 2025

DevOps + MLOps = The Future of Engineering

As DevOps engineers, we've mastered CI/CD, automation, observability, and security. But here's the thing: AI is everywhere, and deploying machine learning models needs more than just code pipelines. That's where MLOps comes in.
- DevOps ensures: smooth deployments, scalability, and monitoring.
- MLOps adds: data versioning, model training pipelines, experiment tracking, drift detection, and continuous model delivery.

Why DevOps folks should care:
• AI is being baked into almost every modern product.
• Companies prefer engineers who can bridge both DevOps and MLOps.
• If you know DevOps, you're already 50% there. MLOps is just the next step.

Think of it like this:
- CI/CD for code → DevOps
- CI/CD for models & data → MLOps

The engineers who can do both will be in the highest demand in the coming years. And here's the reality: if you don't upgrade yourself, time will replace you. The industry moves fast, but the ones who stay curious and keep learning are the ones who stay relevant.

Want to get started with MLOps? Check these out:
• ml-ops.org → Community & learning hub
• mlflow.org → Model lifecycle management
• kubeflow.org → ML on Kubernetes
• dvc.org → Data & experiment versioning
• fullstackdeeplearning.com → Practical course on ML systems
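"CI/CD for models & data" is easiest to picture as a declared pipeline stage. A minimal `dvc.yaml` sketch in the style of the DVC tool listed above; the stage name, script, and file paths are all illustrative placeholders:

```yaml
# dvc.yaml: one versioned training stage (names are placeholders).
stages:
  train:
    cmd: python train.py
    deps:          # inputs DVC tracks; a change here re-triggers the stage
      - train.py
      - data/train.csv
    outs:          # versioned artifacts, like build outputs in DevOps CI
      - models/model.pkl
    metrics:       # tracked metrics, comparable across runs
      - metrics.json:
          cache: false
```

Just as a code pipeline reruns on a new commit, `dvc repro` reruns this stage when the data or script changes, which is the CI/CD-for-models idea in practice.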

Read more →