All 16 Features
Each feature in Navatom Evaluation & Feedback is built for everyday maritime operations — designed to be simple to use, fast to navigate, and fully integrated with the rest of the Navatom platform.
Multi-Criteria Evaluation Models

Build comprehensive evaluation templates with hierarchical criteria, scored questions, and open-ended inquiries. Each evaluation model organizes assessment dimensions into criteria groups, with individual questions carrying configurable weight factors that determine their influence on the final score.
Design models that reflect your organization's specific competency frameworks — from technical skills and safety awareness to communication and leadership. The weighted scoring system ensures that critical assessment dimensions carry proportionally more influence than supplementary ones, producing results that accurately reflect your priorities.
- Hierarchical criteria with scored questions
- Per-question configurable weight factors
- Open-ended inquiry questions
- Reusable model templates
- Composite weighted score calculation
Three Industry-Standard Feedback Methods

Capture satisfaction and effort data using three globally recognized feedback scoring methods — all available within a single module. Net Promoter Score (NPS) uses a 1-10 scale to measure loyalty and recommendation likelihood. Customer Satisfaction Score (CSAT) uses a simple -1/0/1 scale for instant sentiment capture. Customer Effort Score (CES) uses a 7-point Likert scale from Strongly Disagree to Strongly Agree.
Each feedback method is backed by a configurable feedback model with its own lifecycle management. Choose the right method for each touchpoint — NPS for overall relationship health, CSAT for specific interaction satisfaction, CES for process ease assessment. Compare scores across vessels, departments, and time periods with consistent, standardized metrics.
- NPS (1-10) loyalty measurement
- CSAT (-1/0/1) instant sentiment
- CES 7-point Likert effort scale
- Method-specific analytics & trends
- Configurable feedback model per method
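The three methods aggregate differently. A minimal sketch of the standard formulas, assuming the common promoter/detractor cutoffs (9 and above promoters, 6 and below detractors) rather than Navatom's documented configuration:

```python
from statistics import mean

def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters minus % detractors.

    Cutoffs (>= 9 promoter, <= 6 detractor) are the conventional ones,
    assumed here, not taken from the product documentation.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def csat(scores: list[int]) -> float:
    """CSAT on a -1/0/1 scale: share of positive responses, in percent."""
    return 100 * sum(1 for s in scores if s == 1) / len(scores)

def ces(scores: list[int]) -> float:
    """CES: mean of 7-point Likert responses (1 = Strongly Disagree)."""
    return mean(scores)
```

The point of the three formulas is that they are not interchangeable: NPS is a net percentage, CSAT a positive-share percentage, and CES a plain average, which is why the module keeps method-specific analytics separate.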
Four Evaluation Subject Types

Evaluate any person or entity in your maritime ecosystem with four distinct subject types — Crew, Company, Contact, and Employee. Crew evaluations assess seafarer competence, safety compliance, and performance during vessel assignments. Company evaluations measure the performance of organizational units and departments. Contact evaluations rate external partners, suppliers, and service providers. Employee evaluations cover shore-based staff performance.
Each subject type carries its own context — crew evaluations link to vessel assignments and contract periods, contact evaluations connect to procurement records and service agreements, and employee evaluations align with corporate HR cycles. The system adapts its assessment criteria and scoring context based on who is being evaluated.
- Crew competency & performance reviews
- Company organizational assessments
- Contact (vendor/supplier) evaluations
- Employee shore-based staff ratings
- Subject-specific scoring context
Automated Crew Evaluation Triggers

Never miss a crew assessment again. The system automatically triggers evaluation assignments when crew members sign in or sign out of vessel contracts. Sign-in evaluations capture initial competency baselines; sign-out evaluations assess performance over the entire assignment period.
Automated triggers eliminate the administrative burden of manually scheduling crew assessments for every contract transition. The evaluation assignment is created, linked to the correct crew member and vessel, and routed to the appropriate evaluator — all without manual intervention. Configure which evaluation models are triggered for each event type.
- Sign-in contract event triggers
- Sign-out contract event triggers
- Automatic evaluator assignment
- Model selection per trigger type
- Zero manual scheduling required
On-Demand Ad-Hoc Evaluations

Create evaluations on the spot when circumstances require immediate assessment — from ship or office. Ad-hoc evaluations use the same scoring models and criteria as periodic assessments but are triggered manually rather than by scheduled events or contract transitions.
Use ad-hoc evaluations for incident follow-ups, performance concerns, vendor assessments after specific deliveries, or any situation where a structured evaluation is needed outside the regular cycle. The evaluation type (Periodic vs. Ad-Hoc) is tracked separately, so your analytics can distinguish between scheduled and event-driven assessments.
- Instant evaluation creation from ship
- Office-initiated ad-hoc assessments
- Same scoring models as periodic
- Incident follow-up evaluations
- Periodic vs. Ad-Hoc tracking
Weighted Scoring System

Not every question carries equal importance. The per-question weight factor system lets you assign numerical weights to individual evaluation questions, ensuring that critical competency areas contribute proportionally more to the overall score than supplementary ones.
Weight factors are configured at the model level, so every evaluation using that model applies consistent weighting. The system calculates weighted averages automatically, producing composite scores that accurately reflect your organization's assessment priorities. Compare weighted scores across evaluations to identify true performance trends.
- Per-question numerical weight factors
- Importance-adjusted composite scores
- Consistent weighting across evaluations
- Real-time weighted average calculation
- Cross-evaluation score comparison
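The composite calculation described above amounts to a weight-normalized average. A minimal sketch; the function name and signature are illustrative, not Navatom's API:

```python
def weighted_score(answers: list[tuple[float, float]]) -> float:
    """Composite score = sum(score * weight) / sum(weight).

    `answers` pairs each question's score with its model-level
    weight factor, so heavier questions pull the composite harder.
    """
    total_weight = sum(w for _, w in answers)
    if total_weight == 0:
        raise ValueError("at least one question must carry weight")
    return sum(s * w for s, w in answers) / total_weight
```

For example, a score of 4 with weight 2.0 and a score of 2 with weight 1.0 yield a composite of 10/3, not the unweighted mean of 3.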
Open-Ended Inquiry Questions

Complement scored criteria with qualitative feedback through open-ended inquiry questions. While evaluation questions produce numerical scores, inquiry questions capture free-text observations, recommendations, and contextual notes that numbers alone cannot convey.
Mix inquiry questions freely within any evaluation model alongside scored questions. Use them to capture specific incident details, improvement suggestions, or nuanced observations about performance. Inquiry responses are stored with the evaluation record and included in reports, giving reviewers the full picture behind the scores.
- Free-text qualitative feedback capture
- Mixed with scored questions in models
- Incident detail documentation
- Improvement suggestion collection
- Included in evaluation reports
Model Lifecycle Management

Both evaluation models and feedback models follow a controlled three-stage lifecycle — Draft, Published, and Deprecated. Only Published models can be used for live assessments. Draft models allow iterative refinement before deployment. Deprecated models are preserved for historical reference but cannot be assigned to new evaluations.
The lifecycle prevents unauthorized or untested assessment templates from entering production. Model creators prepare and refine in Draft, reviewers approve for Publishing, and outdated models are gracefully retired through Deprecation — maintaining a clean, controlled library of active assessment instruments.
- Draft → Published → Deprecated stages
- Only Published models assignable
- Evaluation & feedback model support
- Historical model preservation
- Controlled template deployment
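A minimal sketch of the three-stage gate, assuming strictly forward transitions (Draft to Published to Deprecated); the state names follow the text, everything else is illustrative:

```python
from enum import Enum

class ModelState(Enum):
    DRAFT = "draft"
    PUBLISHED = "published"
    DEPRECATED = "deprecated"

# Allowed forward transitions; Deprecated is terminal.
TRANSITIONS = {
    ModelState.DRAFT: {ModelState.PUBLISHED},
    ModelState.PUBLISHED: {ModelState.DEPRECATED},
    ModelState.DEPRECATED: set(),
}

def can_assign(state: ModelState) -> bool:
    """Only Published models may be attached to new evaluations."""
    return state is ModelState.PUBLISHED

def transition(state: ModelState, target: ModelState) -> ModelState:
    """Move a model to `target`, rejecting any illegal transition."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state.value} -> {target.value}")
    return target
```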
Five Evaluator Types

Evaluations can be conducted by five distinct evaluator types — Company, Contact, Ship, Employee, and Crew — covering every assessor-subject relationship in maritime operations. A shore-based company manager can evaluate a crew member, a vessel can assess a supplier, or crew members can participate in peer evaluations.
Each evaluator type carries its own permissions and context. Company evaluators access fleet-wide assessment tools. Ship evaluators work within vessel-specific contexts. Contact evaluators receive external evaluation links for vendor self-assessment. The five-type model ensures every assessment relationship in your organization can be formally captured and tracked.
- Company evaluator assessments
- Contact (external) evaluations
- Ship-based evaluator context
- Employee evaluator assignments
- Crew peer evaluation support
Vendor Evaluation & External Links

Evaluate suppliers, service providers, and third-party contractors with the same rigor as internal personnel. Contact evaluations link to procurement records, audit findings, and service agreements, creating a comprehensive vendor performance history that informs future purchasing decisions.
External evaluation links enable vendor self-assessment — send a secure link to a supplier so they can complete their own evaluation form without needing a Navatom account. Evaluations can be triggered by requisition completions, audit findings, or manual initiation. Build a data-driven vendor scorecard that replaces subjective opinions with structured, comparable ratings.
- Supplier performance scoring
- External self-assessment links
- Procurement-triggered evaluations
- No Navatom account required
- Historical vendor score tracking
Ship & Office Evaluation Environments

Run evaluations in both ship and office environments with full synchronization between them. Ship-based evaluations capture assessments performed on board — crew competency checks, vessel-specific vendor ratings, and operational performance reviews. Office-based evaluations handle shore-side assessments — corporate HR reviews, fleet-wide vendor scoring, and management evaluations.
The dual-environment architecture ensures that evaluations created at sea are available ashore, and vice versa. Vessel masters can initiate crew evaluations during voyages, while fleet managers can review and compare results across all vessels from the office. Both environments share the same models, scoring methods, and analytics.
- Ship-based crew assessments
- Office-based management reviews
- Full office-ship synchronization
- Shared models across environments
- Offline evaluation capability
Low-Score Follow-Up Triggers

Automatically flag evaluations and feedback responses that fall below configured thresholds. When an NPS score drops below your target, a CSAT response comes back negative, or an evaluation score falls below the competency baseline, the system triggers follow-up actions that ensure poor performance is addressed promptly.
Low-score triggers convert passive data collection into active performance management. Configure threshold rules per model and per scoring method. Triggered follow-ups can initiate re-evaluation assignments, notify managers, or create linked corrective action records — closing the loop between assessment and improvement.
- Configurable score thresholds
- Automatic follow-up assignments
- Manager notification triggers
- Corrective action linkage
- Per-model threshold rules
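A threshold rule of this kind can be sketched as a small predicate; the field names and method labels here are assumptions, not Navatom's schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ThresholdRule:
    """One per-model, per-method rule; field names are illustrative."""
    method: str      # e.g. "nps", "csat", or "evaluation"
    minimum: float   # scores strictly below this value trigger follow-up

def follow_up_needed(rule: ThresholdRule, method: str, score: float) -> bool:
    """True when the score for the rule's method falls below its floor."""
    return method == rule.method and score < rule.minimum
```

In practice each model would carry a set of such rules, and a triggered rule would fan out to the follow-up actions listed above (re-evaluation, notification, corrective action record).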
Comments & File Attachments

Enrich every evaluation and feedback response with contextual comments and supporting file attachments. Evaluators can add narrative comments to individual answers, attach photographs as evidence, upload supporting documents, and provide detailed justifications for their scores.
Attachments transform evaluations from simple score sheets into comprehensive evidence packages. Upload crew certification documents alongside competency assessments, attach delivery photographs to vendor evaluations, or include incident reports with performance reviews. All attachments are stored with the evaluation record and accessible in reports.
- Per-answer narrative comments
- Photo & document uploads
- Evidence-based evaluations
- Certification attachment support
- Included in report packages
Complete Audit Trail

Every action across both evaluation and feedback systems is recorded in a comprehensive event log spanning 36 event types — 22 for evaluation models and 14 for evaluation execution. From model creation and publication through evaluation assignment, scoring, and completion, the event stream provides a tamper-proof narrative of the entire assessment process.
The audit trail powers accountability, compliance reporting, and process analytics. Every event carries full user attribution, timestamp, and contextual data. Reconstruct exactly when an evaluation was assigned, who scored it, when scores were modified, and who approved the final result — fully auditable for any external inspection.
- 36 tracked event types total
- 22 evaluation model event types
- 14 evaluation execution events
- Full user attribution per event
- Immutable tamper-proof history
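One common way to make an event history tamper-evident is to hash-chain each entry to its predecessor, so altering any past record invalidates every hash after it. A sketch of that general pattern, not a description of Navatom's actual storage:

```python
import hashlib
import json
import time

def append_event(log: list[dict], event_type: str,
                 user: str, data: dict) -> dict:
    """Append one attributed, timestamped event to a hash-chained log.

    Each entry embeds the previous entry's hash, so the chain breaks
    if any earlier record is modified after the fact.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "type": event_type,
        "user": user,        # full user attribution
        "data": data,
        "ts": time.time(),   # event timestamp
        "prev": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry
```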
Evaluation Analytics & Crew Scoring

Transform raw evaluation data into actionable intelligence with dedicated analytics views. Crew score analytics aggregate individual evaluation results across assignments, vessels, and time periods — revealing performance trends, competency gaps, and improvement trajectories that inform training and promotion decisions.
Model performance tracking shows how well your evaluation instruments are working — which criteria produce the most variance, which questions consistently score high or low, and how different evaluator types rate the same subjects. Use analytics to refine your assessment models and ensure they measure what matters.
- Crew score trend analysis
- Cross-vessel performance comparison
- Competency gap identification
- Model effectiveness tracking
- Training & promotion insights
Dashboard & KPI Widgets

Monitor your evaluation and feedback programs at a glance with dedicated dashboard widgets. Track active evaluations, pending assignments, average scores by subject type, NPS trends, CSAT distributions, and completion rates across your fleet — all from a single command center view.
Widgets provide real-time KPIs for fleet managers and HR teams: How many evaluations are pending? What is the fleet-wide NPS score? Which vessels have the lowest crew assessment averages? Drill down from any widget to the underlying evaluation data for detailed analysis.
- Active evaluation count widgets
- NPS & CSAT trend displays
- Pending assignment tracking
- Fleet-wide average scores
- Completion rate monitoring
Technical
Under the Hood
The architecture and engineering capabilities behind Navatom Evaluation & Feedback, from data handling and real-time sync to user interface design.
Dual Assessment Engine Architecture
Evaluations and Feedback operate as two distinct but integrated subsystems within a single module. Evaluations handle multi-criteria weighted scoring with hierarchical criteria and questions.
Feedback handles rapid single-metric capture (NPS, CSAT, CES). Both systems share model lifecycle management, event tracking, and analytics infrastructure.
Weighted Score Calculation Engine
Per-question weight factors produce importance-adjusted composite scores automatically. The scoring engine handles variable criteria counts, missing answers, and different scale types while maintaining mathematical accuracy.
Weighted averages are computed in real time as evaluators complete assessments.
Automated Assignment Triggers
Crew evaluation assignments are created automatically on contract sign-in and sign-out events. The trigger system monitors crew contract lifecycle transitions, selects the appropriate evaluation model, assigns the evaluator, and routes the evaluation — all without manual intervention.
Controlled Model Lifecycle
Both evaluation and feedback models follow a Draft, Published, Deprecated lifecycle with built-in gates. Only Published models can be assigned to live assessments.
The lifecycle prevents untested templates from entering production and preserves deprecated models for historical traceability.
Tamper-Proof Event Logging
Every action across evaluations and feedback is recorded across 36 event types — 22 for evaluation models and 14 for evaluation execution. The immutable event log powers audit trails, compliance reporting, analytics, and real-time notifications with full user attribution.
External Evaluation Link System
Vendor self-assessment is enabled through secure external evaluation links. External contacts complete evaluation forms without requiring a Navatom account.
Responses are captured, validated, and integrated into the same scoring and analytics pipeline as internal evaluations.
Office-Ship Synchronization
Evaluation and feedback data synchronizes between office and vessel over satellite links with automatic conflict resolution. Vessels can conduct crew assessments and capture feedback offline, with changes merging seamlessly when connectivity is restored.
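Automatic conflict resolution on merge is often done last-writer-wins by timestamp. A simplified sketch under that assumption; the product's actual strategy is not specified here:

```python
def merge(office: dict[str, dict], ship: dict[str, dict]) -> dict[str, dict]:
    """Last-writer-wins merge of record sets keyed by record id.

    Each record carries an `updated_at` timestamp; when both sides
    hold the same record, the more recently updated copy wins.
    """
    merged = dict(office)
    for rec_id, rec in ship.items():
        current = merged.get(rec_id)
        if current is None or rec["updated_at"] > current["updated_at"]:
            merged[rec_id] = rec
    return merged
```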
Three-Method Feedback Scoring
The feedback engine supports NPS (1-10 numeric), CSAT (-1/0/1 ternary), and CES (7-point Likert from Strongly Disagree to Strongly Agree) as first-class scoring methods. Each method has its own input interface, calculation logic, and analytics aggregation.
Ready to try Evaluation & Feedback?
Start your free trial today and see how Evaluation & Feedback fits into your fleet operations.