The Complete Guide to AI-Powered Technical Interviews: How to Assess Real Developer Skills
The traditional technical interview is broken. Studies show that 87% of hiring managers struggle to accurately assess developer skills through conventional methods, while 73% of qualified candidates are rejected due to interview bias and inconsistent evaluation processes. As the global demand for technical talent reaches unprecedented levels, companies need a revolutionary approach to identify and hire the best developers.
Enter AI-powered technical interviews—a game-changing methodology that’s transforming how organizations assess real developer skills. This comprehensive guide explores how artificial intelligence is reshaping technical recruitment, providing you with actionable strategies to build stronger development teams faster and more efficiently.
The Evolution of Technical Hiring
The Traditional Technical Interview Problem
Traditional technical interviews have remained largely unchanged for decades, relying on whiteboard coding sessions, theoretical computer science questions, and subjective evaluations. This outdated approach creates several critical issues:
Inconsistency in Evaluation: Different interviewers assess the same candidate differently, leading to unreliable hiring decisions. Research from Google’s People Analytics team found that traditional interview scores varied by up to 40% between different interviewers for identical candidate performance.
Interview Anxiety Impact: The artificial pressure of coding on a whiteboard while being watched doesn’t reflect real-world development scenarios. Studies indicate that 68% of competent developers perform significantly worse in traditional interview settings compared to their actual job performance.
Scalability Limitations: As companies grow, maintaining consistent interview quality becomes increasingly difficult. Manual interview processes create bottlenecks that slow down hiring velocity, particularly problematic in competitive talent markets.
Bias and Subjectivity: Human interviewers unconsciously introduce biases based on educational background, previous company names, communication style, and even physical appearance. These biases disproportionately affect underrepresented groups in technology.
The Rise of AI-Powered Solutions
The emergence of artificial intelligence in technical hiring addresses these fundamental problems by introducing objectivity, consistency, and scalability to the assessment process. AI-powered technical interviews leverage machine learning algorithms, natural language processing, and automated code analysis to evaluate candidates more accurately and fairly.
Modern AI assessment platforms can analyze thousands of data points from a single coding session, including:
- Code quality and efficiency
- Problem-solving approach and methodology
- Debugging skills and error handling
- Communication patterns and technical explanations
- Collaboration potential through pair programming simulations
Understanding AI-Powered Technical Interviews
What Are AI-Powered Technical Interviews?
AI-powered technical interviews use artificial intelligence and machine learning technologies to automate, enhance, and standardize the technical assessment process. These systems go beyond simple automated testing to provide comprehensive evaluation of a candidate’s technical abilities, problem-solving skills, and potential for success in specific roles.
Key characteristics include:
Automated Code Analysis: AI algorithms analyze code quality, efficiency, and best practices adherence in real-time, providing instant feedback on technical competency.
Behavioral Pattern Recognition: Machine learning models identify problem-solving patterns, debugging approaches, and coding habits that correlate with successful developer performance.
Natural Language Processing: Advanced NLP capabilities evaluate technical communication skills, code documentation quality, and ability to explain complex concepts clearly.
Adaptive Testing: AI systems adjust question difficulty and focus areas based on candidate responses, ensuring optimal challenge levels for accurate skill assessment.
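The adaptive loop described above can be sketched in a few lines. The class name, the 1-5 difficulty scale, and the one-step adjustment rule are all illustrative assumptions for this sketch, not a description of any particular platform:

```python
# Minimal sketch of adaptive testing: move one difficulty level up after
# a correct answer, one level down after a miss, clamped to the question
# bank's range. Real systems use richer models (e.g., item response theory).

class AdaptiveQuestionSelector:
    """Selects the next question difficulty from candidate responses."""

    def __init__(self, min_level: int = 1, max_level: int = 5, start: int = 3):
        self.min_level = min_level
        self.max_level = max_level
        self.level = start

    def record(self, answered_correctly: bool) -> int:
        """Update difficulty after a response and return the next level."""
        step = 1 if answered_correctly else -1
        self.level = max(self.min_level, min(self.max_level, self.level + step))
        return self.level

selector = AdaptiveQuestionSelector()
print(selector.record(True))   # harder: 4
print(selector.record(True))   # harder still, capped at 5
print(selector.record(False))  # back off: 4
```

The clamp keeps a string of early misses from locking a candidate out of harder questions later, which matters for accurate skill estimation.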
Types of AI Assessment Technologies
Code Quality Analysis AI
Modern code quality analysis systems go far beyond simple syntax checking to provide deep insight into coding practices and technical proficiency. They typically evaluate:
- Algorithmic efficiency: examining time complexity patterns and identifying optimal solution paths that demonstrate advanced problem-solving capability
- Code readability: machine learning models trained on thousands of high-quality code samples recognize clear variable naming conventions, logical code organization, and proper use of design patterns
- Security vulnerabilities: continuous scanning throughout the assessment for common security flaws and risks that could compromise application integrity
- Maintainability: analysis of code structure, documentation quality, and adherence to industry-specific coding standards that support long-term project success
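As a rough illustration of the kind of signals such an engine might extract, here is a minimal sketch using Python's standard `ast` module. The specific metrics (function count, docstring coverage, nesting depth) are assumptions chosen for simplicity, not any vendor's actual criteria:

```python
# Extract simple static code-quality signals from Python source:
# how many functions it defines, what fraction have docstrings,
# and how deeply control-flow statements are nested.
import ast

def quality_signals(source: str) -> dict:
    tree = ast.parse(source)
    funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    documented = sum(1 for f in funcs if ast.get_docstring(f))

    def depth(node, d=0):
        # Count only control-flow nodes toward nesting depth.
        nested = (ast.If, ast.For, ast.While, ast.Try, ast.With)
        children = list(ast.iter_child_nodes(node))
        return max([depth(c, d + isinstance(c, nested)) for c in children], default=d)

    return {
        "functions": len(funcs),
        "docstring_coverage": documented / len(funcs) if funcs else 1.0,
        "max_nesting_depth": depth(tree),
    }

sample = '''
def add(a, b):
    """Return the sum of a and b."""
    return a + b
'''
print(quality_signals(sample))
```

A production system would combine dozens of such signals with learned weights; the point here is only that each signal is cheap to compute and fully deterministic, which is what makes scoring consistent across candidates.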
Problem-Solving Pattern AI
The most advanced AI assessment systems excel at understanding how developers think and approach complex technical challenges. They analyze:
- Solution strategy: the candidate's initial approach to problem decomposition and their ability to break complex requirements into manageable components
- Debugging methodology: patterns in how candidates isolate issues, formulate hypotheses, and systematically test potential solutions
- Optimization technique: how candidates refine their approach when presented with performance constraints or resource limitations
- Error recovery and adaptation: how effectively candidates learn from mistakes, incorporate feedback, and adjust their strategy when an initial approach proves insufficient
This behavioral analysis provides insight into developer resilience and adaptability that traditional interviews cannot capture.
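One of these behavioral signals, error recovery, can be approximated from a session log of run outcomes. The event format and metric names below are hypothetical, chosen only to show the mechanics:

```python
# Hedged sketch of behavioral-pattern extraction: given a session log of
# code-run outcomes, measure how often a failing run is eventually followed
# by a passing one (a "recovery") and how many attempts each recovery took.

def recovery_stats(events: list[str]) -> dict:
    """events is a sequence of 'fail' / 'pass' run outcomes, in order."""
    recoveries, attempts, open_failures = 0, [], 0
    for outcome in events:
        if outcome == "fail":
            open_failures += 1
        elif outcome == "pass" and open_failures:
            recoveries += 1
            attempts.append(open_failures)  # failed runs before this recovery
            open_failures = 0
    return {
        "recoveries": recoveries,
        "unresolved_failures": open_failures,
        "mean_attempts_to_recover": sum(attempts) / len(attempts) if attempts else 0.0,
    }

log = ["fail", "fail", "pass", "pass", "fail", "pass"]
print(recovery_stats(log))
# {'recoveries': 2, 'unresolved_failures': 0, 'mean_attempts_to_recover': 1.5}
```

Fewer attempts per recovery suggests systematic debugging; many unresolved failures at session end is the kind of pattern a human interviewer rarely has time to quantify.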
Communication Assessment AI
Technical roles increasingly require strong communication skills, particularly in collaborative development environments where clearly explaining complex concepts is essential. Natural language processing technology evaluates:
- Code explanation clarity: how candidates describe their implementation choices, algorithmic decisions, and architectural considerations
- Technical documentation quality: automated analysis of code comments, README files, and explanation narratives candidates provide during the assessment
- Question-asking and clarification patterns: behaviors that demonstrate active engagement and the professional communication habits essential for team collaboration
- Collaboration indicators: how candidates respond to feedback, incorporate suggestions from simulated team interactions, and explain their work to audiences with varying technical backgrounds
Predictive Performance AI
The most sophisticated AI assessment platforms incorporate predictive analytics capabilities that forecast candidate success and organizational fit based on comprehensive performance data analysis. These systems calculate success probability for specific roles by comparing candidate assessment results with historical performance data from successful employees in similar positions. Machine learning models trained on extensive datasets can predict team fit and collaboration potential through analysis of multiple behavioral indicators:
- Communication patterns during technical discussions
- Problem-solving approaches under time constraints
- Technical decision-making processes and rationale
- Response patterns to constructive feedback
- Collaboration style preferences and adaptability
- Leadership potential demonstrations during group exercises
Learning curve analysis examines how quickly candidates adapt to new information, incorporate feedback, and demonstrate skill improvement throughout the assessment process. Long-term performance trajectory forecasting utilizes advanced algorithms to predict career development potential, leadership capabilities, and technical growth patterns that align with organizational needs and advancement opportunities.
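A success-probability score of this kind is typically a fitted statistical model. The sketch below shows only the mechanics with a hand-set logistic function; the signal names, weights, and intercept are invented placeholders, where a real system would learn them (for example, via logistic regression) from historical hire outcomes:

```python
# Illustrative only: combine normalized assessment signals into a 0-1
# success probability through a logistic link. All weights are made up.
import math

WEIGHTS = {"code_quality": 2.0, "debugging": 1.5, "communication": 1.0}
BIAS = -2.5  # placeholder intercept

def success_probability(signals: dict[str, float]) -> float:
    """Map 0-1 assessment signals to a 0-1 success probability."""
    z = BIAS + sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) link

strong = {"code_quality": 0.9, "debugging": 0.8, "communication": 0.7}
weak = {"code_quality": 0.3, "debugging": 0.2, "communication": 0.4}
print(round(success_probability(strong), 2))
print(round(success_probability(weak), 2))
```

The value of the logistic form is interpretability: each weight states how much a given signal moves the predicted odds, which supports the algorithm-auditing practices discussed later in this guide.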
Core Components of AI Assessment Systems
1. Intelligent Code Evaluation Engine
The foundation of any AI-powered technical interview system is its code evaluation engine. This component analyzes submitted code across multiple dimensions to provide comprehensive technical assessment.
Syntax and Structure Analysis
- Proper language syntax usage and convention adherence
- Code organization and modular design principles
- Variable naming consistency and clarity
- Function and class structure optimization
Algorithmic Efficiency Assessment
- Time complexity analysis for optimal solution paths
- Space complexity evaluation for memory usage efficiency
- Algorithm selection appropriateness for given problems
- Performance optimization potential identification
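Time-complexity assessment can also be approximated empirically: count a basic operation at several input sizes and fit the slope on a log-log scale, where a slope near 1 suggests O(n) and a slope near 2 suggests O(n²). The sketch below assumes instrumented solutions that increment a shared counter; that instrumentation scheme is an assumption made for the example:

```python
# Estimate a solution's growth order from operation counts at several sizes.
import math

def count_ops(solution, n: int) -> int:
    counter = {"ops": 0}
    solution(n, counter)
    return counter["ops"]

def growth_exponent(solution, sizes=(100, 200, 400, 800)) -> float:
    xs = [math.log(n) for n in sizes]
    ys = [math.log(count_ops(solution, n)) for n in sizes]
    # Least-squares slope of log(ops) against log(n).
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )

def linear_scan(n, c):      # O(n) reference solution
    for _ in range(n):
        c["ops"] += 1

def quadratic_pairs(n, c):  # O(n^2) reference solution
    for _ in range(n):
        for _ in range(n):
            c["ops"] += 1

print(round(growth_exponent(linear_scan)))      # 1 => linear
print(round(growth_exponent(quadratic_pairs)))  # 2 => quadratic
```

Counting operations rather than wall-clock time keeps the measurement deterministic, which matters when the same assessment must score fairly on very different candidate machines.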
Best Practices Compliance
- Security best practices implementation
- Error handling and edge case management
- Code documentation and comment quality
- Industry-specific coding standards adherence
2. Behavioral Analytics Platform
AI systems excel at identifying subtle behavioral patterns that traditional interviews miss. These platforms analyze candidate behavior throughout the assessment process.
Problem-Solving Approach Analysis
- Initial problem understanding and clarification patterns
- Solution planning and strategy development methods
- Implementation approach and priority setting
- Testing and validation methodology usage
Debugging and Error Resolution
- Error identification speed and accuracy
- Debugging tool usage and effectiveness
- Problem isolation and root cause analysis skills
- Solution iteration and improvement patterns
Adaptability and Learning Indicators
- Response to hint utilization and guidance
- Ability to modify approach based on new information
- Learning from mistakes and avoiding repetition
- Flexibility in switching between different solution strategies
3. Communication Intelligence Module
Technical roles require strong communication skills, particularly in collaborative development environments. AI assessment systems evaluate communication through multiple channels.
Technical Explanation Quality
- Clarity and accuracy of code explanation
- Appropriate technical vocabulary usage
- Complex concept simplification abilities
- Logical flow and structure in explanations
Interactive Communication Assessment
- Question-asking quality and relevance
- Active listening and comprehension demonstration
- Collaborative problem-solving participation
- Feedback reception and implementation
4. Predictive Analytics Framework
The most advanced AI interview systems incorporate predictive analytics to forecast candidate success and fit within specific organizational contexts.
Performance Prediction Models
- Job performance success probability calculations
- Productivity ramp-up time estimations
- Skill development potential assessments
- Career advancement trajectory predictions
Cultural and Team Fit Analysis
- Communication style compatibility evaluation
- Collaboration preference identification
- Work methodology alignment assessment
- Leadership potential and mentoring capability analysis
Benefits Over Traditional Methods
1. Objective and Consistent Evaluation
AI-powered systems eliminate human bias and subjectivity from technical assessments, providing consistent evaluation criteria for all candidates regardless of background, demographics, or interview timing.
Standardized Assessment Criteria: Every candidate faces identical evaluation standards, ensuring fair comparison and reducing discrimination risks.
Bias Reduction: AI algorithms focus on technical competency rather than subjective impressions, significantly reducing unconscious bias in hiring decisions.
Consistent Quality: Unlike human interviewers who may have varying expertise levels or off days, AI systems maintain consistent evaluation quality across all assessments.
2. Scalability and Efficiency
Traditional technical interviews require significant human resources and coordination. AI-powered systems dramatically improve hiring scalability while reducing time-to-hire.
Parallel Processing: AI systems can evaluate multiple candidates simultaneously, eliminating scheduling bottlenecks common in traditional interview processes.
24/7 Availability: Candidates can complete assessments at their convenience, improving candidate experience and reducing scheduling friction.
Reduced Resource Requirements: Organizations can assess more candidates with fewer human resources, allowing hiring teams to focus on final decision-making and culture fit evaluation.
3. Deeper Technical Insights
AI systems analyze candidate performance at levels impossible for human evaluation, providing comprehensive technical insights that improve hiring decisions.
Multi-Dimensional Analysis: Simultaneous evaluation of code quality, problem-solving approach, communication skills, and behavioral patterns provides holistic candidate assessment.
Real-Time Performance Tracking: AI systems monitor candidate performance throughout the assessment, identifying patterns and trends that single-moment evaluations miss.
Predictive Capabilities: Machine learning models predict future performance based on assessment results, helping organizations make more informed hiring decisions.
4. Enhanced Candidate Experience
Well-designed AI assessment systems improve the candidate experience by providing fair, consistent, and convenient evaluation processes.
Reduced Interview Anxiety: Candidates can focus on demonstrating their skills without the pressure of being directly observed by human evaluators.
Immediate Feedback: Many AI systems provide instant feedback on performance, helping candidates understand their strengths and areas for improvement.
Flexible Scheduling: On-demand assessment availability accommodates candidate schedules and reduces interview process friction.
Implementation Framework
Phase 1: Assessment Strategy Development
Before implementing AI-powered technical interviews, organizations must develop a comprehensive assessment strategy aligned with their specific hiring needs and organizational goals.
Define Role Requirements
- Identify critical technical skills for each development role
- Determine required experience levels and specialization areas
- Establish performance expectations and success metrics
- Document collaboration and communication requirements
Establish Evaluation Criteria
- Create weighted scoring systems for different skill areas
- Define minimum acceptable performance thresholds
- Develop role-specific assessment frameworks
- Align evaluation criteria with actual job responsibilities
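The weighted scoring system mentioned above can be as simple as a normalized weighted average checked against a minimum threshold. The skill areas, weights, and the threshold of 75 below are placeholder values for illustration, not a recommended rubric:

```python
# Combine per-skill scores (0-100) into one number using normalized weights.

def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of scores; weights need not sum to exactly 1."""
    total_weight = sum(weights.values())
    return sum(scores[k] * w for k, w in weights.items()) / total_weight

weights = {"algorithms": 0.4, "code_quality": 0.3, "communication": 0.3}
candidate = {"algorithms": 80, "code_quality": 90, "communication": 70}

score = weighted_score(candidate, weights)
print(round(score, 2))   # 80.0
print(score >= 75)       # passes a minimum threshold of 75
```

Normalizing by the weight sum means hiring teams can tune individual weights per role without having to re-balance the whole rubric each time.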
Select Assessment Types
- Choose appropriate technical challenges and problem types
- Determine assessment duration and complexity levels
- Select programming languages and technology stacks
- Define real-world scenario simulations
Phase 2: Technology Platform Selection
Choosing the right AI assessment platform is crucial for implementation success. Organizations should evaluate platforms based on technical capabilities, integration options, and alignment with hiring objectives.
Core Platform Capabilities
- Code analysis depth and accuracy
- Supported programming languages and frameworks
- Assessment customization options
- Reporting and analytics features
Integration Requirements
- Applicant Tracking System (ATS) compatibility
- Human Resources Information System (HRIS) integration
- Single sign-on and security compliance
- API availability and documentation quality
Scalability and Performance
- Concurrent user capacity
- Assessment processing speed
- Uptime reliability and support quality
- Data storage and security measures
Phase 3: Pilot Program Implementation
Start with a limited pilot program to test AI assessment effectiveness and refine processes before full-scale deployment.
Pilot Scope Definition
- Select specific roles and departments for initial testing
- Define pilot duration and success metrics
- Establish feedback collection mechanisms
- Create rollback plans for potential issues
Parallel Testing Strategy
- Run AI assessments alongside traditional interviews
- Compare results and identify correlation patterns
- Gather hiring manager and candidate feedback
- Measure time-to-hire and quality improvements
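For the comparison step, a Pearson correlation between AI scores and traditional interview scores for the same pilot candidates is a natural starting point. The scores below are invented sample data for the sketch:

```python
# Pearson correlation between two score lists for the same candidates.
# r near 1 means the AI assessment ranks candidates much like the
# traditional process; r near 0 means the two methods disagree.
import math

def pearson_r(xs: list[float], ys: list[float]) -> float:
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

ai_scores = [62, 71, 80, 55, 90, 68]          # synthetic pilot data
interview_scores = [60, 75, 78, 50, 88, 70]   # same six candidates

print(round(pearson_r(ai_scores, interview_scores), 2))
```

High correlation validates the AI system against the incumbent process; persistently low correlation for a particular role is a signal to revisit either the assessment content or the traditional interview itself.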
Process Refinement
- Adjust assessment parameters based on pilot results
- Refine evaluation criteria and scoring algorithms
- Optimize candidate experience based on feedback
- Update training materials and documentation
Phase 4: Full-Scale Deployment
After successful pilot validation, organizations can proceed with comprehensive AI assessment implementation across all relevant hiring processes.
Rollout Planning
- Develop phased deployment schedule by department
- Create comprehensive training programs for hiring teams
- Establish support processes and escalation procedures
- Implement change management strategies
Quality Assurance
- Monitor assessment accuracy and reliability
- Track candidate satisfaction and feedback
- Measure hiring outcome improvements
- Continuously optimize system parameters
Best Practices for AI-Driven Assessments
1. Design Realistic Assessment Scenarios
Create assessment scenarios that closely mirror actual job responsibilities and challenges developers will face in their roles.
Project-Based Challenges: Instead of isolated coding problems, design mini-projects that require multiple skills and realistic problem-solving approaches.
Technology Stack Alignment: Use the same programming languages, frameworks, and tools that candidates will work with in their actual roles.
Collaborative Elements: Include pair programming simulations or code review exercises that assess teamwork and communication skills.
Time-Appropriate Complexity: Ensure assessment difficulty and duration align with role expectations and seniority levels.
2. Maintain Human Oversight
While AI provides objective evaluation, human judgment remains crucial for final hiring decisions and process optimization.
Human-AI Collaboration: Use AI assessments to inform rather than replace human decision-making, combining objective technical evaluation with cultural fit assessment.
Regular Algorithm Auditing: Periodically review AI assessment results for accuracy, bias, and alignment with actual job performance outcomes.
Continuous Calibration: Compare AI assessments with on-the-job performance to continuously improve algorithm accuracy and relevance.
Exception Handling: Establish processes for handling edge cases or unusual situations that AI systems may not evaluate appropriately.
3. Focus on Candidate Experience
Prioritize positive candidate experience throughout the AI assessment process to maintain employer brand strength and attract top talent.
Clear Communication: Provide detailed information about the assessment process, expectations, and evaluation criteria before candidates begin.
Technical Support: Offer robust technical support to help candidates navigate the assessment platform and resolve any technical issues.
Feedback Provision: Share constructive feedback with candidates about their performance, regardless of hiring decision outcomes.
Accessibility Considerations: Ensure assessment platforms accommodate candidates with disabilities and provide alternative evaluation methods when necessary.
4. Continuous Improvement and Optimization
Implement systematic processes for ongoing AI assessment system improvement and optimization.
Performance Tracking: Monitor key metrics including candidate satisfaction, assessment accuracy, time-to-hire, and quality of hire improvements.
Regular Updates: Keep assessment content current with evolving technology trends and industry best practices.
Stakeholder Feedback: Regularly collect and act on feedback from candidates, hiring managers, and development team members.
Competitive Analysis: Stay informed about AI assessment technology advances and competitor practices to maintain competitive advantage.
Overcoming Common Challenges
Challenge 1: Algorithm Bias and Fairness
Problem: AI systems can perpetuate or amplify existing biases present in training data or assessment design.
Solutions:
- Diverse Training Data: Ensure training datasets represent diverse developer backgrounds, experiences, and approaches
- Bias Testing: Regularly test assessment outcomes across different demographic groups to identify potential bias
- Algorithm Transparency: Use explainable AI techniques to understand how decisions are made and identify bias sources
- Inclusive Design: Involve diverse teams in assessment design and validation to catch potential bias early
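A basic version of such a bias test is the "four-fifths" screening heuristic: flag any group whose pass rate falls below 80% of the highest group's rate. The group labels and outcomes below are synthetic example data:

```python
# Compare assessment pass rates across demographic groups and flag any
# group whose selection rate is under 80% of the best group's rate.

def adverse_impact(outcomes: dict[str, list[bool]], threshold: float = 0.8) -> dict:
    rates = {g: sum(passed) / len(passed) for g, passed in outcomes.items()}
    best = max(rates.values())
    flagged = [g for g, rate in rates.items() if rate < threshold * best]
    return {"rates": rates, "flagged_groups": flagged}

outcomes = {
    "group_a": [True, True, False, True, True],    # 80% pass rate
    "group_b": [True, False, False, True, False],  # 40% pass rate
}
print(adverse_impact(outcomes))
```

A flag here is a prompt for investigation, not proof of bias: the next step is examining which assessment items drive the gap and whether they actually predict job performance.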
Challenge 2: Technical Implementation Complexity
Problem: Integrating AI assessment systems with existing hiring infrastructure can be technically challenging.
Solutions:
- Phased Integration: Implement AI assessments gradually, starting with standalone systems before full integration
- API-First Approach: Choose platforms with robust APIs for seamless integration with existing systems
- Technical Expertise: Invest in technical training for HR teams or partner with implementation specialists
- Backup Systems: Maintain traditional assessment methods as backup during transition periods
Challenge 3: Candidate Resistance and Skepticism
Problem: Candidates may be skeptical of AI evaluation or prefer traditional human interaction.
Solutions:
- Transparent Communication: Clearly explain the benefits and fairness of AI assessment to candidates
- Hybrid Approaches: Combine AI assessments with human interaction for final rounds or cultural fit evaluation
- Success Stories: Share positive outcomes and improvements achieved through AI assessment implementation
- Opt-Out Options: Provide alternative assessment methods for candidates who strongly prefer traditional approaches
Challenge 4: Keeping Assessments Current
Problem: Technology evolves rapidly, and assessment content can quickly become outdated.
Solutions:
- Regular Content Updates: Establish quarterly review cycles for assessment content and evaluation criteria
- Industry Monitoring: Stay current with technology trends and emerging skills requirements
- Expert Advisory: Maintain relationships with senior developers who can advise on assessment relevance and accuracy
- Automated Updates: Use AI systems that can automatically incorporate new assessment patterns and technologies
Real-World Case Studies
Case Study 1: FinTech Startup Triples Its Technical Team
Company Background: A Series B fintech startup needed to grow their development team from 15 to 45 developers within six months to meet product delivery deadlines.
Challenge: Traditional interview processes were too slow and inconsistent, leading to poor hiring decisions and extended time-to-fill for critical positions.
AI Implementation:
- Deployed comprehensive AI assessment platform covering full-stack development skills
- Created role-specific evaluation criteria for backend, frontend, and DevOps positions
- Implemented automated initial screening to filter qualified candidates efficiently
Results:
- Reduced average time-to-hire from 45 days to 18 days
- Improved interview-to-offer conversion rate by 67%
- Achieved 95% new hire retention rate after 12 months
- Decreased hiring cost per developer by 43%
Key Success Factors:
- Thorough assessment customization for specific role requirements
- Strong change management and training for hiring team
- Continuous optimization based on hiring outcome data
Case Study 2: Enterprise Software Company Eliminates Bias
Company Background: A large enterprise software company identified significant bias in their technical hiring process, with underrepresentation of women and minority candidates in their development teams.
Challenge: Traditional interviews showed inconsistent evaluation patterns that disadvantaged certain candidate groups, despite equal technical competency.
AI Implementation:
- Implemented blind AI assessment system focusing purely on technical skills demonstration
- Removed identifying information from initial assessment phases
- Created standardized evaluation criteria applied consistently across all candidates
Results:
- Increased diversity hire percentage by 89% within 18 months
- Improved technical assessment accuracy correlation with job performance by 56%
- Reduced interview process complaints and legal risks
- Enhanced employer brand reputation and candidate attraction
Key Success Factors:
- Leadership commitment to diversity and inclusion goals
- Comprehensive bias testing and algorithm validation
- Transparent communication about fairness improvements
Case Study 3: Remote-First Company Optimizes Global Hiring
Company Background: A distributed software company hiring developers across multiple time zones and countries faced coordination challenges with traditional interview scheduling.
Challenge: Time zone differences made scheduling difficult, while varying interview quality across different regions led to inconsistent hiring decisions.
AI Implementation:
- Deployed 24/7 accessible AI assessment platform with multi-language support
- Standardized technical evaluation across all global hiring locations
- Implemented automated screening and ranking systems
Results:
- Expanded candidate pool by 240% through improved accessibility
- Reduced scheduling coordination time by 78%
- Achieved consistent hiring quality across all global offices
- Improved candidate experience scores by 52%
Key Success Factors:
- Platform accessibility and multi-language capabilities
- Consistent assessment standards across all regions
- Strong technical support for global candidate base
Future of AI in Technical Hiring
Emerging Technologies and Trends
Advanced Natural Language Processing: Next-generation NLP will enable more sophisticated evaluation of technical communication, documentation skills, and collaborative abilities.
Computer Vision Integration: AI systems will analyze candidate behavior, problem-solving patterns, and engagement levels through video assessment capabilities.
Predictive Analytics Enhancement: Machine learning models will become more accurate at predicting long-term performance, career trajectory, and team fit.
Real-Time Collaboration Assessment: AI will evaluate candidates through simulated team environments and real-time collaboration scenarios.
Integration with Development Workflows
Continuous Assessment: AI systems will extend beyond hiring to provide ongoing skill assessment and development recommendations for existing team members.
Project-Based Evaluation: Future systems will assess candidates through actual project contributions, providing more realistic performance indicators.
Peer Review Integration: AI will incorporate peer feedback and collaborative assessment data to provide comprehensive evaluation perspectives.
Ethical AI Development
Explainable AI Standards: Industry standards will emerge requiring transparent, explainable AI assessment systems that candidates and organizations can understand and trust.
Bias Prevention Protocols: Advanced bias detection and prevention mechanisms will become standard features in all AI assessment platforms.
Candidate Rights Protection: Regulatory frameworks will establish candidate rights regarding AI assessment, including explanation rights and appeal processes.
Getting Started: Your Action Plan
Step 1: Assess Your Current Hiring Process (Week 1-2)
Audit Existing Methods:
- Document current technical interview processes and timelines
- Identify pain points, bottlenecks, and inconsistencies
- Measure current hiring metrics (time-to-hire, quality of hire, candidate satisfaction)
- Gather feedback from hiring managers and recent candidates
Define Improvement Goals:
- Set specific targets for time-to-hire reduction
- Establish quality of hire improvement objectives
- Define candidate experience enhancement goals
- Create diversity and inclusion improvement targets
Step 2: Build Internal Support (Week 2-4)
Stakeholder Alignment:
- Present business case for AI assessment implementation to leadership
- Secure budget approval and resource allocation
- Identify internal champions and change management leaders
- Address concerns and resistance from hiring team members
Team Preparation:
- Provide AI assessment education and training for hiring teams
- Establish new role responsibilities and workflow processes
- Create communication plans for candidate and internal stakeholder updates
- Develop success metrics and measurement frameworks
Step 3: Vendor Selection and Pilot Design (Week 4-8)
Platform Evaluation:
- Research and evaluate AI assessment platform options
- Conduct vendor demonstrations and proof-of-concept tests
- Compare features, pricing, and integration capabilities
- Check references and case studies from similar organizations
Pilot Program Design:
- Select specific roles and departments for initial testing
- Define pilot duration, scope, and success criteria
- Create parallel testing methodology comparing AI to traditional methods
- Establish feedback collection and analysis processes
Step 4: Implementation and Optimization (Week 8-16)
Pilot Launch:
- Implement AI assessment system for selected roles
- Monitor system performance and candidate feedback closely
- Track hiring metrics and compare to baseline performance
- Make real-time adjustments based on initial results
Analysis and Refinement:
- Analyze pilot results against established success criteria
- Identify areas for improvement and optimization
- Refine assessment parameters and evaluation criteria
- Prepare recommendations for full-scale deployment
Step 5: Full Deployment and Scaling (Week 16+)
Rollout Planning:
- Develop comprehensive deployment schedule for all relevant roles
- Update job postings and candidate communication materials
- Train all hiring team members on new processes and systems
- Establish ongoing support and maintenance procedures
Continuous Improvement:
- Implement regular performance monitoring and optimization cycles
- Gather ongoing feedback from candidates and hiring teams
- Stay current with AI assessment technology developments
- Expand AI assessment capabilities based on success and needs
Conclusion
AI-powered technical interviews represent a fundamental shift in how organizations identify and hire exceptional developer talent. By leveraging artificial intelligence to provide objective, consistent, and comprehensive technical assessment, companies can build stronger development teams while creating fairer, more efficient hiring processes.
The benefits are clear: reduced bias, improved scalability, deeper technical insights, and enhanced candidate experiences. However, successful implementation requires careful planning, appropriate technology selection, and commitment to continuous improvement and optimization.
As the competition for technical talent intensifies, organizations that embrace AI-powered assessment will gain significant advantages in attracting, evaluating, and hiring the best developers. The future of technical hiring is here—and it’s powered by artificial intelligence.
The transformation won’t happen overnight, but with the right strategy, tools, and commitment, your organization can revolutionize its technical hiring process and build the exceptional development team that will drive your business forward.
About Axiom Pro: Axiom Pro is a leading AI-powered recruitment and IT outstaffing company specializing in connecting organizations with top-tier technical talent. Our innovative assessment platform helps companies identify and hire the best developers while reducing costs by up to 78% and improving hiring quality.
Ready to transform your technical hiring process? Contact Axiom Pro to learn how AI-powered assessments can help you build exceptional development teams faster and more effectively.