Railways

Data Warehouse for Rail Defectoscopy

Railways of the Slovak Republic ordered a diagnostic vehicle for non-destructive testing from a consortium of companies, which delivered the vehicle, the measuring technologies, and the software to evaluate all the measured data. For this consortium, we created a data warehouse and an analytical tool that presents the data in a user-friendly portal.

Core competencies

Data analytics
Data visualization
Data warehouse
High data volume processing
Microservices

Technologies

Java
Spring Stack
Vue
REST

Problem definition and goal

There are three situations in which a user needs to aggregate a lot of data to determine the potential impact on customers and their products.

  • In case of a planned network element upgrade, decommissioning, or any other change which might affect its functionality
  • In case of a device outage identified by operation teams and monitoring tools
  • In case of a customer complaint about service malfunction

In all of these situations, many users had to collaborate, access many tools, and compile the results to find the corresponding answer. This process is error-prone and in more complicated cases (e.g. decommissioning of a metropolitan optical cable) might take two weeks to finish.

Challenges

We had to face the same challenges as in the CELINE project. This project pushes the requirements even further and thus introduces a new set of challenges in addition to those from CELINE.

Find relations across huge graphs in real-time

If you want to identify all services which rely, for example, on a given metropolitan optical cable, you not only need high-quality and strongly correlated data, but you also need to inspect large parts of the network both vertically and horizontally. The problem with such reports is also that you don't know in advance how "deep" you need to search.

Complete the missing data

Although we successfully correlated data from all existing inventory systems and all network management systems in Orange Slovakia, there were still gaps we had to fill to have the full picture of the network. For example, no data source described the service architecture down to the level of particular network elements.

Impact visualization

The requirement was to provide a list of services and customers impacted by a network element outage. This list is used, for example, to inform customers about planned outages. Even with such a list, the question "how can the user validate that the list is correct?" would arise. Another question we had to keep in mind was "how can the user understand the chain of the outage which results in an impact on this particular customer?".

Solutions

Luckily, we already had a production system in place and a lot of experience in the domain of telecommunication inventory integration from our previous project, CELINE. We decided to reuse our solutions and code. We also used this opportunity to modernize our code stack, refactor important parts for better performance, and create a better product for both CELINE's and SIA's use cases.

Find relations across huge graphs in real-time

We decided to continue using our standard relational database as our main persistent storage. However, to be able to search the huge graphs in real time, we introduced a specialized graph database, Neo4j, which greatly enhanced our possibilities for performing analyses of the telecommunication network. The data in Neo4j are synchronized from our main persistent database and are de-normalized and enhanced to further speed up the network impact analyses.
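The unknown search depth is the core difficulty: the traversal has to follow dependency edges until no new objects appear. The sketch below illustrates the idea on an in-memory graph, assuming a simplified model in which edges point from a dependent object to the object it relies on (service → port → cable); in production this traversal runs inside Neo4j.

```typescript
type Edge = { from: string; to: string };

// Returns every object that transitively depends on `failed`,
// without knowing in advance how "deep" the search must go.
function impactedBy(failed: string, edges: Edge[]): Set<string> {
  // Index: for each object, which objects depend on it directly?
  const dependents = new Map<string, string[]>();
  for (const e of edges) {
    if (!dependents.has(e.to)) dependents.set(e.to, []);
    dependents.get(e.to)!.push(e.from);
  }
  // Breadth-first search upward through the dependency graph.
  const impacted = new Set<string>();
  const queue = [failed];
  while (queue.length > 0) {
    const node = queue.shift()!;
    for (const dep of dependents.get(node) ?? []) {
      if (!impacted.has(dep)) {
        impacted.add(dep);
        queue.push(dep);
      }
    }
  }
  return impacted;
}

// Example: two services ride on one metropolitan cable via ports.
const edges: Edge[] = [
  { from: "service-A", to: "port-1" },
  { from: "service-B", to: "port-2" },
  { from: "port-1", to: "cable-M" },
  { from: "port-2", to: "cable-M" },
];
const impacted = impactedBy("cable-M", edges);
```

In Neo4j itself the same question is expressed with a variable-length pattern, which frees the application from guessing the depth up front.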

Complete the missing data

We had to adjust the implementation of our tool to not only show and analyze data integrated from other systems but also to allow the user to edit and further enhance this data. Part of the missing data was the service architecture, which naturally has the form of a diagram. We designed a specialized DSL (domain-specific language) that allows such a diagram to be created very effectively from underlying inventory objects.
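As an illustration of the DSL idea, a few lines of text can reference inventory objects and expand into diagram nodes and edges. The actual SIA syntax is not shown here; the grammar below is invented for illustration.

```typescript
type Diagram = {
  nodes: Set<string>;
  edges: { from: string; to: string; label: string }[];
};

// Parses lines of the form "objectA -> objectB : optional label".
function parseDiagram(source: string): Diagram {
  const nodes = new Set<string>();
  const edges: Diagram["edges"] = [];
  for (const raw of source.split("\n")) {
    const line = raw.trim();
    if (line === "" || line.startsWith("#")) continue; // blank lines and comments
    const m = line.match(/^(\S+)\s*->\s*(\S+)(?:\s*:\s*(.+))?$/);
    if (!m) throw new Error(`Cannot parse line: ${line}`);
    const [, from, to, label] = m;
    nodes.add(from);
    nodes.add(to);
    edges.push({ from, to, label: label ?? "" });
  }
  return { nodes, edges };
}

const diagram = parseDiagram(`
# customer access to core network
cpe-123 -> access-switch-9 : last mile
access-switch-9 -> core-router-1 : uplink
`);
```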

Impact visualization

To give the user more information about the impact characteristics, or the reasons why the tool included some customers in the impact report, we had to design specialized views of the impact data. For example, users can choose to visualize the impact on a map to see how the impact of one site outage spreads geographically. In addition, users can visualize the full path from the specific customer devices to the network elements used by all customers of the given service.

SIA facts

In production since

2021

Data sources integrated

8

Inventory objects

Close to 3 mil

Users

650+

Problem definition and goal

CELINE was designed for the GNOC (Global Network Operation Center). The GNOC is responsible for the maintenance of Orange affiliates' network infrastructure. To achieve this goal, access to information about the network's infrastructure is paramount.

The original approach for the GNOC with regard to inventory management was to access whatever tools were available in the country for which they provided services. It was clear early on that working with different systems, from huge inventory solutions like NetCracker to Excel spreadsheets, was not efficient. Therefore, it was identified that there should be a single inventory solution for the GNOC that integrates all available data sources. One could describe the situation as follows:

Before Celine

After Celine

Challenges

OBJECTIFY has been part of the CELINE project from the very beginning. Our first contribution to the project was to define the problem faced by the GNOC: accessing information scattered across many different data sources. It soon became clear that we would face many significant challenges:

Data diversity

The applications storing the network infrastructure data were hugely different from each other in terms of APIs, level of detail, and data models. Our goal was to create a unified view of the data for the user. From the user's perspective, the representation of, say, a server or router should be the same regardless of whether the original data came from an NMS (Network Management System), an Excel spreadsheet, or an inventory tool.

Data synchronization

It was not possible to replace all the tools used by Orange affiliates because they’re part of the day-to-day work of local engineers and integrated with other specific IT solutions.

Information instead of data

Even when data from the various data sources reside in the same application, to provide real benefit these data need to be correlated with each other, aggregated into user reports, and made accessible via standardized APIs for further automation and reporting tools.

Data quality assurance

It is common for inventory systems to contain data that is out of date. The problem becomes even bigger when third-party companies with no management authority (such as the GNOC) need to initiate data cleaning or update stale data.

There are significant project management issues related to projects with so many participants, but this goes beyond the scope of this case study.

Solutions

OBJECTIFY was active in the telecommunication inventory management domain long before this project. Through our experience, we gained valuable expertise without which we could never have delivered these solutions:

Data Diversity

We created an inventory data model ranging from very generic terminology to more specific types. This model can be extended and can document any inventory situation we might encounter in the data sources, including situations that had not even been identified at the time. The result was the "Common data model", which we described and tested in detail.

Data synchronization

We have created specialized data pumps to collect and transform data to the “Common data model” and send the aggregated view after each synchronization to a centralized inventory in Poland. These data pumps allow custom connections and transformations for each data source. All the downstream calculations, transformations, and algorithms are generic and independent of the data source.
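A data pump's transformation step can be sketched as follows, assuming a drastically simplified "Common data model" with just an id, a type, and a name; real pumps map far richer structures, and the field names here are illustrative.

```typescript
type CommonObject = { id: string; type: string; name: string; source: string };

// Each data source gets its own transformer; everything downstream
// (calculations, correlations, reports) works only with CommonObject.
const transformers: Record<string, (rec: any) => CommonObject> = {
  nms: (r) => ({ id: `nms:${r.deviceId}`, type: r.kind, name: r.hostname, source: "nms" }),
  excel: (r) => ({ id: `xls:${r["Device ID"]}`, type: r["Type"], name: r["Name"], source: "excel" }),
};

function pump(source: string, records: any[]): CommonObject[] {
  const transform = transformers[source];
  if (!transform) throw new Error(`No transformer for source: ${source}`);
  return records.map(transform);
}

// Heterogeneous inputs, unified output.
const unified = [
  ...pump("nms", [{ deviceId: "r1", kind: "router", hostname: "core-r1" }]),
  ...pump("excel", [{ "Device ID": "s7", Type: "switch", Name: "floor-switch-7" }]),
];
```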

Data quality assurance

We always need to make sure the data presented to the user is correct. We have implemented mechanisms to identify data inconsistencies, such as broken referential integrity or violated data invariants. We have also allowed users to override data coming from a source system while making sure that such corrections are consistently visible across the whole application (both the original value from the data source and the value corrected by the GNOC).

Information instead of data

Having the data does not mean that the user receives all the information they require. To solve this issue, we have put a lot of emphasis on search capabilities across all inventory objects and attributes. Part of the CELINE solution is integration with our own DMS tool, which provides storage for unstructured data (work instructions, manuals, common problems, and solutions) and integrates this information into CELINE. We have also designed and implemented many custom reports tailor-made for the daily information requirements of GNOC engineers. These reports allow an aggregated view of data from various sources.

Celine facts

Project age

6 years

Countries

4

Data sources

14

Inventory objects

1 mil

Users

360+

Orange Slovakia has chosen the CELINE platform as the core for its next inventory-related project, "Service impact analysis", which you can check here.

Problem definition and goal

Part of building network infrastructure for telecommunication operators is creating a lot of different documents like lease contracts, CAD drawings, measurements, installation materials, and so on. When the telecommunication site is operational, other activities like site revisions are performed which generate further documentation.

All these documents were previously stored on a single shared drive and as the volume of documents grew, it became harder and harder to find the right ones. To work around this problem, users started to create their own folder hierarchies, which meant duplicating some of the documents. The shared drive also had other problems, like lack of full-text search, no traceability, poor user rights management, and so on, which needed to be solved.

Challenges

This project involved a large amount of unstructured data, which needed to be categorized so that users could find what they needed no matter what role they played in the organization. The documents were created and used by internal Orange employees as well as by users from external companies.

Document organization

It was obvious from analyzing the documents stored on the shared drive that users had different "views" on document categorization. For some users, categorization was "Site-centric", whereas other users categorized based on "Time-centric" priorities. This resulted in different folder hierarchies and document fragmentation.

Structured and unstructured data

Although most data was stored in documents, some information was stored in the folder structures themselves. Document type, site number, year, and much more were encoded in folder names, which prevented proper use of this information.

Duplication and quality assurance

It often happened that users missed some mandatory folders, which resulted in wrong document categorization, or uploaded a document that was already there.

Search

The primary concern with a shared drive as a document storage system was poor usability when searching for documents. It was not possible to use full-text search, and it was hard to navigate the folder hierarchies and resolve duplicates.

Security and trackability

If users moved documents to another location or accidentally deleted documents, it was not possible to trace who did it. It was also not easy to protect documents from unauthorized access.

Data import

There was already a large set of documents stored on the shared drive, which had to be loaded into DMS without losing information about their categorization.

Data to information

Users had no easy way to answer a simple question such as: how many sites exist that do not have an electrical revision for the year 2020?

Solutions

At OBJECTIFY, we created a web-based DMS system which is not organized around folders but instead allows users to define custom fields on documents, which can store various metadata and be used for searching.

Document organization and structured data

Using folders to categorize documents has some major drawbacks: users must agree on a single hierarchy of folders and assign documents accordingly, and folders do not give any information about the data they contain. That is why we decided not to use folders at all. In DMS, all structured information (project, creation date, document type, anything you like) is stored in a specific document attribute, which allows for advanced filtering, reporting on documents, validations, permission rules based on attribute values, and more.
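The attribute-based model can be sketched like this, with illustrative attribute names; any combination of attributes can then replace folder navigation.

```typescript
type Doc = { name: string; attributes: Record<string, string> };

// Filtering replaces folder hierarchies: any combination of
// attributes can be queried, in any order.
function filterDocs(docs: Doc[], query: Record<string, string>): Doc[] {
  return docs.filter((d) =>
    Object.entries(query).every(([key, value]) => d.attributes[key] === value)
  );
}

const docs: Doc[] = [
  { name: "lease.pdf", attributes: { Project: "Site-42", "Document type": "Lease contract", Year: "2020" } },
  { name: "revision.pdf", attributes: { Project: "Site-42", "Document type": "Revision", Year: "2021" } },
  { name: "drawing.dwg", attributes: { Project: "Site-7", "Document type": "CAD drawing", Year: "2020" } },
];

// A "Site-centric" query combined with a "Time-centric" one.
const site42in2020 = filterDocs(docs, { Project: "Site-42", Year: "2020" });
```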

Preventing duplication and wrong categorization

Users can define document types and their attributes. They can also customize which fields are mandatory. To prevent users from duplicating content, we have implemented a search for identifying similarities in content and alerting the user of results found. Additionally, users can define “Upload trees” to easily upload different types of documents for a given scenario. Users can also configure rules to include structured information from folders being uploaded.

Search

We use Apache Solr to index documents and provide full-text search with word-occurrence highlighting, typeahead, and spellcheck corrections. Further, we implemented faceted search so that users can easily narrow down full-text search results. Faceted search also allows for a hierarchy-independent search order: "Site-centric" users can start by filtering by site, and "Time-centric" users can filter by a date range.
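Conceptually, facet counts are just per-attribute value counts over the current result set. Solr computes them server-side; the in-memory sketch below only illustrates the idea.

```typescript
type Hit = { id: string; fields: Record<string, string> };

// Counts distinct values of one field across the current result set,
// giving the user the options (and their sizes) to narrow down by.
function facetCounts(hits: Hit[], facetField: string): Map<string, number> {
  const counts = new Map<string, number>();
  for (const hit of hits) {
    const value = hit.fields[facetField];
    if (value === undefined) continue;
    counts.set(value, (counts.get(value) ?? 0) + 1);
  }
  return counts;
}

const hits: Hit[] = [
  { id: "1", fields: { site: "BA-01", year: "2020" } },
  { id: "2", fields: { site: "BA-01", year: "2021" } },
  { id: "3", fields: { site: "KE-04", year: "2020" } },
];

// "Site-centric" users facet by site; "Time-centric" users by year.
const bySite = facetCounts(hits, "site");
const byYear = facetCounts(hits, "year");
```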

Security and trackability

We have implemented a security mechanism in DMS that can be fine-tuned down to the level of single documents. Users can specify access rights individually for each document or configure rules based on document field values. Furthermore, we keep a detailed log of activity on each document, so no user action gets lost.

Data import

To be able to migrate data to DMS, we have implemented a stand-alone migration tool with a configurable rule engine, which not only migrates documents but can also synchronize documents from various data sources (shared drive, SharePoint, database, etc.). This way, DMS can be used not only as document storage but also as a search engine for other applications.

Data to information

Having all structured data stored in dedicated fields allowed us to create a reporting engine, where users can create tabular reports to follow the progress of installations, document statistics, and much more.

DMS facts

Project age

6 years

Installations

Orange Slovakia,
GNOC Africa

Number of documents

1 million+

Users

250+

Problem definition and goal

Most manufacturers of photovoltaic components primarily showcase their products on their respective websites. However, when users seek to compare and evaluate items from multiple manufacturers in a standardized manner, there are few public portals that facilitate such comparisons. A key objective of this project was to present photovoltaic components from various regions or manufacturers in a cohesive manner.

Equally crucial is the provision of an API for third-party applications, enabling them to utilize portal data for their own purposes and better serve their customers. For instance, this API could aid in evaluating different product types for designing a photovoltaic power plant.

Implementation challenges

  • Serverless architecture on AWS
  • Faceted navigation using only PostgreSQL (without usage of ElasticSearch or Solr)
  • Complex mathematical validations

Customer

Our customer is a leading provider of photovoltaic software applications and web-based solutions for the optimization of construction, evaluation, and management of solar power assets.

Core features

  • Registration of PV components, including support for multiple versions of a component
  • Validations and calculations of component attributes
  • Searching capabilities (full-text search, faceted search)
  • Comparisons of components
  • Rest API for third-party applications
  • Exports and imports of components in commonly used formats

Catalog of components

The application enables users to display a list of items that can be filtered based on various criteria such as manufacturer name, mechanical or electrical characteristics, and more. Additionally, full-text search and faceted search functionalities are provided to facilitate convenient searching on the portal for users.

The application is accessible to the public.

Go to the PV components catalog

Statistics

In addition to basic catalog display functionality, the application offers screens for supervisory workers, allowing them to access statistical information about the collection of items displayed in the catalog or utilized by third-party companies. Furthermore, providing statistical insights into the usage of the API by selected companies is a standard feature of the portal.

Administration

The administration section of the portal allows for comprehensive management of users and companies. It enables the registration and management of user accounts, oversight of company registrations, and verification of component data accuracy and completeness. Additionally, it includes tools to monitor activities related to the usage of the API by third-party applications.

Tech stack

Frontend

  • Angular
  • RxJS
  • Transloco
  • Custom UI component library
  • Various JavaScript/TypeScript frontend libraries

Backend

  • Node.js
  • TypeScript
  • AWS
    • API Gateway, Lambda
    • RDS, RDS Proxy, S3
    • SNS, SES
    • Cognito
    • VPC
  • Middy, Zod

Problem definition and goal

A medical guideline is a document with the aim of guiding decisions and criteria regarding diagnosis, management, and treatment in specific areas of healthcare. These guidelines are updated regularly and are often in the form of “free text” documents.

A healthcare provider is obliged to know the medical guidelines of his or her profession and must decide whether to follow a guideline's recommendations for an individual treatment. It is important to find a way to promote the newest guidelines and draw attention to possible divergences between the guidelines and widespread practice.

Prof. Dr. Paul Martin Putora proposed a method to document the decision-making process in a structured way. He used a decision tree notation that condensed the information in a very efficient and readable way and also allowed the decision-making process to be compared between different healthcare providers.

Challenges

Terminology unification

The treatment decisions are based on parameters which can have different names in different hospitals. Even when different hospitals use the same names for the same parameters, they might express parameter values in different units; for example, glucose can be measured in mmol/l as well as mg/dl. To perform calculations, this terminology had to be unified, or at least be mappable.
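As a minimal sketch of such unit mapping, glucose readings can be normalized to a canonical unit using the molar mass of glucose (~180.16 g/mol, i.e. roughly 18.016 mg/dL per mmol/L). The choice of mmol/l as the canonical unit here is only an assumption for illustration.

```typescript
type Unit = "mmol/l" | "mg/dl";

// Approximate conversion factor for glucose, derived from its molar mass.
const GLUCOSE_MG_PER_MMOL_DL = 18.016;

// Normalizes a glucose reading to the canonical unit mmol/l.
function toCanonicalGlucose(value: number, unit: Unit): number {
  switch (unit) {
    case "mmol/l":
      return value;
    case "mg/dl":
      return value / GLUCOSE_MG_PER_MMOL_DL;
  }
}

// Two hospitals report the same measurement in different units:
const hospitalOne = toCanonicalGlucose(5.5, "mmol/l"); // already canonical
const hospitalTwo = toCanonicalGlucose(99, "mg/dl");   // ≈ 5.5 mmol/l
```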

Comparison trees

The decision trees can be documented in diverse ways. Some prefer to use complex mathematical expressions to define when a given action is performed, while others use simple logical expressions. Some medical centers may omit certain parameters because the measurements for that parameter have not been taken. Finding differences in such complicated structures was not trivial.

Huge state-space

The decision-making comparison has been performed in studies in which more than 10 hospitals took part. One treatment can depend on 10+ parameters. When evaluating such a state-space, we can easily end up with billions of combinations that need to be evaluated.

Data presentation

The differences between treatments can be quite profound. It is hard to get the right insights from the results unless users investigate them in detail.

Solutions

Terminology unification

Before the decision tree for a particular treatment can be created, the terminology has to be unified in the treatment template. This template defines a vocabulary for decision trees in terms of the parameters that must be considered in the treatment, as well as the actions that can be performed. When the user is creating a decision tree, the system uses the template to provide guidance and ease the process of input.

Comparison trees

We have implemented algorithms that can transform a decision tree into a multidimensional state-space and assign each "coordinate" a set of actions for a given parameter range. This allows us to compare the coordinates to find differences. From this information, we can generate back a decision tree that represents the comparison results. We can use the same approach to enhance the validation of decision trees, for example to calculate parameter ranges that have no action assigned or have contradictory conditions.
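The comparison idea can be sketched as follows: each tree is reduced to a function over a discretized parameter space, every coordinate is evaluated, and coordinates whose actions differ are collected. The parameters and actions below are invented for illustration.

```typescript
type Params = { tumorSize: number; age: number };
type DecisionFn = (p: Params) => string;

// Two hospitals' decision trees reduced to functions over the parameters.
const hospitalA: DecisionFn = (p) => (p.tumorSize > 3 ? "surgery" : "observe");
const hospitalB: DecisionFn = (p) =>
  p.tumorSize > 3 && p.age < 70 ? "surgery" : "observe";

// Collects every coordinate where the two trees disagree.
function findDifferences(a: DecisionFn, b: DecisionFn, grid: Params[]): Params[] {
  return grid.filter((p) => a(p) !== b(p));
}

// Discretize the state-space into a grid of coordinates.
const grid: Params[] = [];
for (const tumorSize of [1, 2, 3, 4, 5]) {
  for (const age of [50, 60, 70, 80]) {
    grid.push({ tumorSize, age });
  }
}

const differences = findDifferences(hospitalA, hospitalB, grid);
```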

Huge state-space

To compute large decision trees from many institutions, we have implemented parallelized computation using the Google task queue service. The computational state-space is divided into many smaller spaces that are sent to the task queue. The service spins up as many workers as needed and performs the computation in parallel, returning results that are merged into a single comparison result. This way, we can perform comparison calculations in near real time.
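The partitioning step can be sketched like this; the chunk size is illustrative, and the workers are simulated sequentially here rather than dispatched to a real task queue.

```typescript
// Splits a large state-space into fixed-size chunks, each an
// independent work unit for a queue worker.
function partition<T>(space: T[], chunkSize: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < space.length; i += chunkSize) {
    chunks.push(space.slice(i, i + chunkSize));
  }
  return chunks;
}

// Simulate workers evaluating chunks independently, then merge
// the partial results into a single comparison result.
const space = Array.from({ length: 10_000 }, (_, i) => i);
const chunks = partition(space, 1_000);
const partialResults = chunks.map((chunk) => chunk.filter((x) => x % 7 === 0).length);
const merged = partialResults.reduce((a, b) => a + b, 0);
```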

Data presentation

To reduce the comparison results, users can filter by actions or parameter values and thus reduce the state-space to a much smaller size. We are still investigating options to make the comparison result more compact and readable. Currently, we are experimenting with correlation matrices.

Awards

Dodes has been used in many studies, some of which have been funded by major pharma companies such as Roche and Genzyme.

The results of these studies were published in Neurosurgery, in an article called “Patterns of Care and Clinical Decision Making for Recurrent Glioblastoma Multiforme” which you can read here: https://academic.oup.com/neurosurgery/article/78/2/N12/2453749.

The reviewers state: "The authors are to be congratulated for identifying core clinical decision-making criteria that may be useful in future studies of recurrent GBM. This decision tree is an excellent reference for clinical trial development, and several active clinical trials already target the DODEs identified in this study."

This especially credits the authors of the study around Prof. Dr. Paul Martin Putora, but it also proves the quality of our tool and its usability in discovering optimal treatment strategies.

Description

We have implemented a solution for capturing biometric data using libraries and SDKs created by Innovatrics. The portal solution offers features that can be used for civil identification or criminal investigation. The application allows the usage of various devices to capture fingerprints or iris scans. Additionally, a wide range of camera types is supported.

Customer

Innovatrics is an independent EU-based provider of trusted biometric solutions for governments and enterprises.

Core features

  • Face detection and facial recognition
  • Iris scans
  • Plain and rolled fingerprint scans
  • Palm scans
  • Handwritten signatures

Solution

All the modalities captured from applicants are configurable. This means a customer can opt for a deployment that supports only, for example, face detection and plain fingerprint scans, or for a deployment with support for all features.

Face detection and subsequent facial recognition are the core features of the enrollment portal. Iris scans are also one of the common features offered by the application.

Scanning of irises
Captured scan of irises

Fingerprint scans

The application supports both plain and rolled fingerprint scans. With the plain type, users can switch between slap mode and single-finger scans.

Scanning of a right slap
Captured fingerprint scans
Scanning of rolled fingerprints

Applicant identification

In the final step, the system utilizes biometric recognition technologies to identify the applicant. Leveraging facial recognition, iris scans, and fingerprint data, the application employs sophisticated algorithms to match the captured biometric information against existing records in its database. By employing these robust biometric modalities, the application ensures a high level of accuracy and security in the identification process, facilitating reliable decision-making in various scenarios, from civil identification to criminal investigation.

Problem definition and goal

Our customer had existing commissioning management software accessible via a web browser as a common web application or as an iOS app available in the App Store. They needed an Android application that would provide identical functionality to the iOS app.

Customer

CxAlloy provides commissioning management software and facility management software to customers in North America.

Core features

  • Multiple project support
  • Automatic synchronization with server
  • Usage in offline mode
  • Creation and modification of field observations
  • Quality verification using checklists and tests
  • Issue management
  • Asset tracking

Solution

The application allows users to manage multiple projects. It ensures synchronization of data changes between the client and the application server. Multiple users can work on the same project simultaneously across various platforms (iOS, web, Android). Advanced algorithms on the server side ensure the integrity and correctness of the data when the same project is used in parallel by different users.
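CxAlloy's actual server-side algorithms are not public, but one common technique for this kind of conflict detection is optimistic concurrency with version numbers: the server accepts a change only if it was based on the latest version, and a stale client must re-sync and retry. A minimal sketch of that general technique:

```typescript
type SyncRecord = { id: string; version: number; value: string };

class SyncServer {
  private store = new Map<string, SyncRecord>();

  get(id: string): SyncRecord | undefined {
    return this.store.get(id);
  }

  // Returns true if the update was accepted, false if the client is stale.
  push(update: SyncRecord): boolean {
    const current = this.store.get(update.id);
    const expected = current ? current.version + 1 : 1;
    if (update.version !== expected) return false; // conflict: client must re-sync
    this.store.set(update.id, update);
    return true;
  }
}

const server = new SyncServer();
server.push({ id: "issue-1", version: 1, value: "open" });

// Two clients edit in parallel, both based on version 1; only the first wins.
const clientA = server.push({ id: "issue-1", version: 2, value: "resolved" });
const clientB = server.push({ id: "issue-1", version: 2, value: "closed" });
```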


Various files and file types (e.g., pictures or PDFs) can be attached to each project. Additionally, users can add annotations to these files, which are displayed in a layer above the file. This annotation layer can be modified as needed.

Field observations allow users to document their progress regarding onsite construction activities. These observations can be utilized by different project roles, including architects, engineers or consultants. Field observations management also includes the ability to create and modify comments, with options for attaching pictures.


Issue tracking

The application enables efficient collaboration on design and construction issues, which can originate from various sources such as field observations or tests. Issues can include deficiencies, questions, or comments. A flow between the various states of an issue is supported by the application, which can result in the creation of additional tasks or the assignment of an issue to different persons. The application also contains issue templates, which simplify the creation of new issues.

Challenge

Many companies offer open positions to software developers. We had to stand out and create a unique product. At the same time, it was important to build trust between the company and developers.

Our Goal

The main goal of Koderia (formerly campaigned as "Developers for Developers") was to increase the number of email subscribers for our open positions at Objectify.

The beginning of the multi-functional portal

Koderia started as a simple website with a simple form where developers could find out how much they should earn (the Adequate Salary Calculator). We had proper knowledge of the economic situation in the industry, so we were able to deliver personal, quick, and adequate responses. The campaign was popular among developers and gained hundreds of new contacts. At that point, we decided to create an online space that would merge all important information about the industry into one place.

Moving to Vue and Firestore

As Koderia grew, it became apparent that WordPress simply wouldn't be enough.

The decision to go with Firebase proved to be a beneficial one. Firebase offered us many out-of-the-box customizable features such as:

  • Realtime Database was used until more querying possibilities were needed; since then, Firestore has provided everything we have needed, and probably ever will need, for the Koderia project.
  • Cloud Functions proved to be a fitting replacement for a standard backend.
  • Firebase Authentication was a pleasant feature to use and implement. Providers such as Google, Facebook, Github (and also standard email + password) were implemented with ease.
  • Firebase CI/CD also offered us an easy-to-use deployment solution that we have set up and used since our first release.

Graphs, visualizations, and much more can now be found at Koderia. We owe many thanks to chart.js. Integrating this library, along with many other external and custom-made libraries, was effortless.

Key features of Koderia today

Koderia CV

The most advanced feature of Koderia is a unique CV developed for software engineers. The design of the CV is tailored to the software industry and helps communicate information in an easier and tidier way than most other CV services available. Developers can then apply for open positions on Koderia, or anywhere else. While building this feature, our main priorities were:

Responsiveness

The CV had to be easily readable on all devices and in print

Language selector

Users must be able to select a language

Security

For more privacy, users can protect their CV with a password

LinkedIn Support

Transfer information easily from LinkedIn

Skill graph

The main point that differentiates the Koderia CV from any other is a radar chart that shows how much a person is oriented towards Frontend, Backend, Database, DevOps, or Styling. This helps convey all the skills, education, and experience in the blink of an eye. People who have created their CVs can react to open positions more quickly and also see whether a position suits their skills and experience or not.
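A hypothetical sketch of how such axis scores could be derived from a developer's listed technologies; the keyword-to-axis mapping and the simple counting below are invented for illustration, as Koderia's real scoring is not public.

```typescript
// Invented mapping from technology keywords to radar-chart axes.
const axisKeywords: Record<string, string[]> = {
  Frontend: ["vue", "react", "angular"],
  Backend: ["node", "java", "spring"],
  Database: ["postgresql", "mongodb", "firestore"],
  DevOps: ["docker", "kubernetes", "ci/cd"],
  Styling: ["css", "sass", "tailwind"],
};

// Scores each axis by counting how many of its keywords appear
// in the developer's skill list.
function skillAxes(skills: string[]): Record<string, number> {
  const normalized = skills.map((s) => s.toLowerCase());
  const scores: Record<string, number> = {};
  for (const [axis, keywords] of Object.entries(axisKeywords)) {
    scores[axis] = keywords.filter((k) => normalized.includes(k)).length;
  }
  return scores;
}

const axes = skillAxes(["Vue", "Node", "Firestore", "CSS", "Java"]);
```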

Koderia Payroll Calculator

Payroll calculator

The payroll calculator lets users calculate their income based on the type of their employment. The feature brings a rich visual representation of all the expenses and taxes. The main goal is to educate and explain which type of employment is better in their situation. Many Slovak companies, especially in the IT industry, offer contracts instead of permanent employment. As the salary varies in these cases, the user is informed about all the differences.

To learn more, visit koderia.sk/mzdova-kalkulacka.

Original content

Koderia creates a place where users can always find new and fresh information related to the software industry. It consists of sections that deliver dynamic content:

Posts

3rd party articles and videos, carefully selected by us

Events

One place for all the offline and online IT events in Slovakia

Blogs

Space for our thoughts and anyone who wants to share their ideas among our audience

Jobs

Our primary content lets users find their next profession in the IT industry

Koderia kept its first feature, the adequate salary calculator, which is now also integrated into the CV. There is no need to fill in another form: when the CV is created, the adequate salary is calculated too.

In the time the project has been online, Koderia has established its place in Slovakia and plans to continue supporting developers, bringing new features and improving existing ones.

Problem definition and goal

Measurement systems produce a huge amount of data, which needs to be transferred to a datastore and later cleaned and transformed into a format understandable to a common user. Due to the size of the database, which is estimated to be ca. 5TB, the data must be transformed into a format that is optimized for future display on the portal. The application must provide a user interface that displays the requested data with fast response times and is able to present the measured data on an underlying map.

Challenge

The project involves a large volume of measured data that needs to be transferred from a locomotive to a server, then cleaned and transformed into a format that enables later data analysis and provides short response times for the presentation layer.
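The clean-and-transform step described above can be sketched as follows: raw delimited lines from a measuring system are parsed into typed records, and malformed rows are dropped before loading. The field layout (track id, position in km, measured value) and the semicolon delimiter are illustrative assumptions, not the actual file format of the measuring systems.

```java
import java.util.List;
import java.util.Optional;

public class MeasurementEtl {

    // Hypothetical cleaned shape of one measurement row.
    record Measurement(String trackId, double positionKm, double value) {}

    /** Parses one raw line; returns empty for malformed input so bad rows can be filtered out. */
    static Optional<Measurement> parse(String rawLine) {
        String[] fields = rawLine.split(";");
        if (fields.length != 3) return Optional.empty();
        try {
            return Optional.of(new Measurement(
                    fields[0].trim(),
                    Double.parseDouble(fields[1]),
                    Double.parseDouble(fields[2])));
        } catch (NumberFormatException e) {
            return Optional.empty();
        }
    }

    /** Keeps only the rows that parsed cleanly. */
    static List<Measurement> clean(List<String> rawLines) {
        return rawLines.stream()
                .map(MeasurementEtl::parse)
                .flatMap(Optional::stream)
                .toList();
    }
}
```

In a real pipeline this step would run as a batch job after transfer from the locomotive, writing the typed records into the warehouse schema optimized for the portal's queries.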

Cooperation of multiple vendors

One of the challenges was also to arrange cooperation among several companies, including multiple vendors of measuring systems and multiple vendors of software solutions.

About the customer

Railways of the Slovak Republic manages the state-owned railway infrastructure on the territory of the Slovak Republic and also provides transportation services.

Solution

The data warehouse consolidates vast amounts of information collected from measurement systems installed on the locomotive. This warehouse efficiently stores, cleans, and transforms raw measurement data into a structured format, enabling comprehensive analysis and reporting. It supports advanced data analytics, facilitating the identification of rail defects, trends, and patterns over time.


Campaigns

The measurements of selected track sections are planned in advance as part of a measurement campaign. When creating a campaign, a user has to define the entire route plan for the diagnostic vehicle, including details regarding the vehicle's speed, types of diagnostics, and scheduled stops at stations.

After completing a campaign, a user can view and analyze the collected data. It is possible to download the original files produced by the measuring systems, view pictures or videos from cameras, and locate the defects on a map.
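A campaign as described above is essentially a route plan plus per-segment parameters. The sketch below models that structure; the class and field names are illustrative assumptions about the data model, not the application's actual schema.

```java
import java.util.ArrayList;
import java.util.List;

public class Campaign {

    // One leg of the diagnostic vehicle's route, with the parameters
    // mentioned above: speed, types of diagnostics, and scheduled stops.
    record RouteSegment(String fromStation, String toStation,
                        int speedKmh, List<String> diagnostics, boolean scheduledStop) {}

    private final String name;
    private final List<RouteSegment> plan = new ArrayList<>();

    Campaign(String name) {
        this.name = name;
    }

    Campaign addSegment(RouteSegment segment) {
        plan.add(segment);
        return this; // fluent style so a whole route reads as one chain
    }

    /** Total number of scheduled station stops on the route. */
    long scheduledStops() {
        return plan.stream().filter(RouteSegment::scheduledStop).count();
    }
}
```

Once the campaign runs, the measurement files, camera footage, and detected defects can all be keyed to these segments, which is what lets the portal locate defects on the map.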



Campaigns and all associated files have their own lifecycle, which helps users to analyze the results and plan subsequent repairs.

Dashboard

The application features a comprehensive dashboard with interactive tiles, allowing users to easily navigate to various screens such as reports, campaigns or files. Each tile provides quick access to different functionalities, ensuring an intuitive and efficient user experience. The dashboard serves as the central hub, offering an overview of key information and functionalities. Users can access detailed reports, manage campaign data, and navigate to all other screens directly from the dashboard.


Reports

The application includes a variety of reports to support comprehensive data analysis and decision-making. These reports encompass measurement outputs, detailing the precise data collected during campaigns. Error reports highlight any anomalies or issues detected in the data. Sectional assessment reports evaluate specific track sections, while measurement status reports offer real-time updates on the progress and results of ongoing measurements. Additionally, frontal photo reports present visual documentation from cameras, aiding in the identification and verification of defects.
Each report is designed to provide clear and actionable insights, enhancing the overall efficiency and effectiveness of the rail maintenance process.
