Senior - AI Engineer
Date: 26 Sept 2025
Location: MITC, Kandivli, IN
Company: Mahindra & Mahindra Ltd
About the Team:
Mahindra AI Division is the AI innovation arm within Mahindra Group. The division experiments with state-of-the-art AI technologies and is focused on developing and delivering cutting-edge AI products to the group. As the division expands its team, it offers excellent opportunities for career growth.
Responsibilities:
- Design, develop, and maintain backend services and APIs to support AI/ML applications at scale.
- Productionize ML and deep learning models, ensuring efficiency, reliability, and low-latency inference.
- Build scalable microservices and REST/gRPC APIs for deploying AI solutions in enterprise environments.
- Optimize AI model serving using frameworks such as TensorRT, ONNX Runtime, Triton Inference Server, or FastAPI.
- Implement MLOps practices including model versioning, monitoring, and automated deployments.
- Collaborate with AI researchers, data scientists, and product managers to translate prototypes into production-ready systems.
- Ensure robustness, fault tolerance, and security in backend AI systems.
- Integrate AI services with enterprise data platforms, cloud systems, and third-party APIs.
- Contribute to architecture discussions, design reviews, and performance tuning.
- Mentor junior engineers and contribute to best practices in AI software engineering.
Educational & Experience Requirements
- Bachelor’s degree with at least 7 years of experience, or Master’s degree with at least 4 years of experience, in Computer Science, Software Engineering, Data Science, or a related field.
- A degree from a Tier-1/2 institute (IIT/IISc/NIT if studied in India) or a globally top-ranked university (as per QS rankings) is preferred.
Technical Requirements
- Strong proficiency in Python and backend frameworks such as FastAPI.
- Expertise in prompt engineering and working with various LLMs.
- Experience in productionizing AI/ML models with efficient inference pipelines.
- Hands-on experience with model deployment frameworks (Triton, TensorRT, TorchServe, ONNX Runtime).
- Knowledge of cloud platforms (Azure, GCP) and container technologies (Docker).
- Strong experience with microservices architecture, CI/CD pipelines, and monitoring tools (Prometheus, Grafana).
- Familiarity with databases (SQL/NoSQL) and scalable data storage solutions.
- Exposure to LLMs, SLMs, and GenAI model integration into backend systems is a strong plus.
- Understanding of security, authentication, and performance optimization in large-scale systems.
- Experience with version control (Git) and Agile development practices.
Behavioral Requirements
- Excellent problem-solving skills and attention to detail.
- Strong written and verbal communication skills in English.
- Ability to work collaboratively in cross-functional teams.
- Passion for building reliable backend systems that translate AI models into real-world impact.