2026-02-22 | 6 min read

How I Built SmartSDLC with IBM Granite AI

A practical breakdown of requirement classification, code generation, bug fixing, test creation, and summarization in one Gradio workflow.

Key Takeaways

  • Designed a 6-module Gradio workflow that carries raw requirements through to developer-ready output.
  • Reached 95%+ requirement classification accuracy in PDF-driven validation runs.
  • Reduced manual QA effort by around 60% by generating first-pass pytest cases.

SmartSDLC started as a capstone challenge: convert requirement-heavy documents into usable engineering outputs without wasting developer cycles. I built the workflow around IBM Granite prompts and lightweight Gradio screens so each module could be tested independently and improved quickly.
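The "each module could be tested independently" idea can be sketched as plain Python callables behind a registry, with each Gradio tab binding to one entry. This is a minimal illustration, not the project's actual code: the function names (`classify_requirement`, `generate_code_stub`, `MODULES`) are hypothetical, and the keyword heuristics stand in for the Granite-backed prompts.

```python
# Hypothetical sketch: each SmartSDLC-style module as an independent callable,
# so it can be unit-tested without launching the Gradio UI.

def classify_requirement(text: str) -> str:
    """Stand-in for the Granite-backed classifier: tag a requirement line."""
    keywords = {"must": "functional", "should": "functional",
                "secure": "non-functional", "performance": "non-functional"}
    lowered = text.lower()
    for word, label in keywords.items():
        if word in lowered:
            return label
    return "unclassified"

def generate_code_stub(requirement: str) -> str:
    """Stand-in for the code-generation module: emit a placeholder function."""
    name = "_".join(requirement.lower().split()[:3]) or "todo"
    return f"def {name}():\n    # TODO: implement: {requirement}\n    pass\n"

# A registry like this lets each UI tab bind to exactly one module function,
# and lets tests call the functions directly.
MODULES = {
    "classify": classify_requirement,
    "generate": generate_code_stub,
}
```

In the real app, each registry entry would be wrapped in its own Gradio component (for example via `gr.Interface` or a tab), so the UI layer stays a thin shell over functions you can iterate on in isolation.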

The system follows a simple sequence. First, requirement text is extracted and classified. Then code generation and bug-fix modules provide implementation support. Finally, the test generator and summarizer create review artifacts so developers can move faster without skipping quality checks.
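The sequence above can be expressed as one chained pipeline where each stage's output feeds the next. Again a hedged sketch: every function here is an illustrative stand-in for a Granite-prompted module, and the names (`extract_requirements`, `run_pipeline`) are my own, not from the project.

```python
# Hypothetical end-to-end sequence mirroring the stages described:
# extract -> classify -> (code gen / bug fix) -> test gen -> summarize.

def extract_requirements(document: str) -> list[str]:
    """Pull non-empty lines from raw requirement text (PDF text in the real app)."""
    return [line.strip() for line in document.splitlines() if line.strip()]

def run_pipeline(document: str) -> dict:
    """Chain the stages so each module's output feeds the next."""
    requirements = extract_requirements(document)
    # Toy classifier in place of the model call.
    classified = [(req, "functional" if "must" in req.lower() else "other")
                  for req in requirements]
    # Review artifacts: one test stub per requirement plus a summary line.
    return {
        "requirements": classified,
        "tests": [f"test_{i}" for i, _ in enumerate(requirements)],
        "summary": f"{len(requirements)} requirement(s) processed",
    }

result = run_pipeline("Users must reset passwords.\nPages load within 2s.")
```

Keeping the pipeline a pure function of the input document also makes validation runs (like the PDF-driven accuracy checks mentioned above) easy to repeat.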

The biggest win was consistency. Instead of manually writing test scaffolding every time, the generator produces first-pass pytest cases in seconds. This brought QA preparation effort down by roughly 60 percent in repeated usage trials and helped me standardize output quality across modules.

If I continue this project for production usage, the next step is deployment on Hugging Face Spaces with a tighter prompt versioning workflow so model updates remain traceable.