The role of an administrator is crucial for managing IT systems, networks and digital platforms in an organization. An administrator has advanced permissions and responsibilities that allow them to control various aspects of the technical infrastructure and ensure that it is operated efficiently and securely. Here are some of the main responsibilities of an administrator:
User management: Administrators manage user accounts, access rights and permissions. They create new user accounts, assign them the necessary permissions and manage access control to ensure that only authorized users can access certain resources.
Security: Administrators are responsible for the security of IT systems to protect against data loss and unauthorized access.
Troubleshooting and support: The administrator is often the first point of contact for technical issues. They help users troubleshoot and resolve problems and ensure that the system is running smoothly.
In addition to these responsibilities, administrators are also tasked with managing sensitive settings and ensuring that systems meet compliance requirements and information security best practices. This includes managing sensitive data, configuring access controls and permissions, and monitoring and analyzing system logs to identify and address potential security risks.
Security is an essential aspect of any organization, especially when it comes to managing user accounts and access rights. Here are some best practices to maintain a secure user management protocol:
Regular password updates: Encourage users to update their passwords regularly to keep their accounts secure. Establish password complexity policies and require strong passwords that combine letters, numbers, and special characters (a minimal example check follows this list).
Monitor administrator actions: Implement mechanisms to monitor administrator activities to detect suspicious or unusual activity. Log all administrator actions, including access to sensitive data or settings, to ensure accountability and identify potential security breaches.
Limit the number of administrators: Reduce the number of administrators to a minimum and grant administrative privileges only to those who really need them. By limiting the number of administrators, you minimize the risk of security breaches and make it easier to manage and monitor user accounts.
Two-factor authentication (2FA): Implement two-factor authentication for administrator accounts to further increase security. This adds an extra verification step, ensuring that even if a password is compromised, an attacker cannot gain unauthorized access to the account.
Regular security reviews: Conduct regular security reviews and audits to identify and remediate potential security gaps or vulnerabilities. Review the access rights and permissions of user accounts to ensure they meet current requirements and best practices.
Training and awareness: Regularly train employees and administrators on security best practices and awareness of phishing attacks and other cyber threats. Make them aware of the importance of security and encourage them to report suspicious activity.
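As an illustration of the password-complexity point above, here is a minimal sketch of such a check. The policy values (minimum length 12, required character classes) are assumptions for illustration, not DocBits defaults:

```python
import re

def is_strong_password(password: str) -> bool:
    # Require at least 12 characters with letters, numbers, and special characters.
    return (
        len(password) >= 12
        and re.search(r"[A-Za-z]", password) is not None
        and re.search(r"[0-9]", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

assert is_strong_password("Tr0ub4dor&3xample!")
assert not is_strong_password("password123")  # no special character
```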
By implementing these best practices, organizations can improve the security of their user management protocol and minimize the risk of security breaches and data loss. It is important to view security as an ongoing process and make regular updates and adjustments to keep up with ever-changing threats and security requirements.
Groups and permissions allow administrators to control access to sensitive data and resources.
By assigning permissions at the group level, administrators can exercise granular control over who can access what information.
This helps prevent unauthorized access and data leaks.
The Least Privilege Principle states that users should only be given the permissions they need to perform their jobs.
By using groups, administrators can efficiently manage permissions and ensure that users can only access the resources relevant to their particular role or department.
Using groups makes it easier to manage user permissions in large organizations with many users and resources.
Instead of setting permissions individually for each user, administrators can assign permissions at the group level, simplifying administration and reducing administrative overhead.
Using groups and permissions allows organizations to monitor access activities and meet compliance requirements.
Logging access events allows administrators to perform audits to ensure that permissions are properly managed and that unauthorized access is not occurring.
Groups and permissions provide flexibility to adapt to changing needs and organizational structures.
Administrators can create groups based on departments, teams, projects, or other criteria and dynamically adjust permissions to ensure users always have access to the resources they need.
Overall, groups and permissions play a key role in increasing the security of IT infrastructure, improving operational efficiency, and ensuring compliance with policies and regulations. By managing groups and permissions wisely, organizations can effectively protect their data and resources while promoting employee productivity.
In a company with different departments such as finance, marketing, and human resources, employees in each department will need access to different types of documents and resources.
For example, the finance team needs access to financial reports and invoices, while HR needs access to employee data and payroll.
In a healthcare organization, different groups of employees will need different levels of access to patient data.
For example, doctors and nurses will need access to medical records and patient histories, while administrative staff may only need access to billing data and scheduling.
In an educational institution such as a university, different groups of users will need different levels of access to educational resources.
For example, professors will need access to course materials and grades, while administrators will need access to financial data and student information.
Companies may need to implement different levels of access to meet legal and compliance requirements.
For example, financial institutions may need to ensure that only authorized employees have access to sensitive financial data to ensure compliance with regulations such as the General Data Protection Regulation (GDPR).
In a project team, different members may require different levels of access to project materials.
For example, project managers may need access to all project resources, while external consultants may only have access to certain parts of the project.
In these scenarios, it is important that access levels are defined according to the roles and responsibilities of user groups to ensure the security of data while improving employee efficiency. By implementing role-based access control, organizations can ensure that users can only access the resources required for their respective function.
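As an illustration of role-based access control, here is a minimal sketch that maps roles from the scenarios above to the resources they may access. The role and resource names are illustrative, not a DocBits configuration:

```python
# Each role is granted only the resources it needs (least privilege).
ROLE_PERMISSIONS = {
    "finance": {"financial_reports", "invoices"},
    "hr": {"employee_data", "payroll"},
    "project_manager": {"project_plan", "budget", "timesheets"},
    "external_consultant": {"project_plan"},
}

def has_access(role: str, resource: str) -> bool:
    # A user may access a resource only if their role grants it.
    return resource in ROLE_PERMISSIONS.get(role, set())

assert has_access("finance", "invoices")
assert not has_access("external_consultant", "budget")
```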
The App Color setting allows administrators to define the color scheme of the application interface. This feature is particularly useful for distinguishing between different environments such as testing, sandbox, and production. By assigning distinct colors to each environment, users can easily identify which environment they are working in, reducing the risk of performing critical actions in the wrong environment.
Navigate to Company Settings:
From the main menu, click on the Company Information section.
Locate the App Color Section:
Scroll down to the App Color section within the Company Information settings.
Choose a Color:
Click on the color box or enter a hex code directly into the text field.
A color picker will appear, allowing you to select the desired color.
You can enter a specific hex code if you have a predetermined color for the environment.
Save the Color:
Once you have selected the color, click on the Save button to apply the change.
The application interface will immediately update to reflect the new color.
Reset to Default:
If you wish to revert to the default color, click the Reset button.
To avoid confusion, it is recommended to establish a standard color scheme for each environment:
Production: Use a neutral or default color, such as #FFFFFF (white) or #f0f0f0 (light grey), to indicate the live environment.
Testing: Use a bright or alerting color, such as #ffcc00 (yellow) or #ffa500 (orange), to indicate a testing environment.
Sandbox: Use a distinct color, such as #007bff (blue) or #6c757d (grey), to indicate a sandbox or development environment.
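Where environments are provisioned or documented in code, the recommended scheme can be captured as a simple mapping. The structure below is illustrative; the hex values are just the suggestions listed above:

```python
# Suggested per-environment interface colors from the scheme above.
APP_COLORS = {
    "production": "#FFFFFF",  # neutral/default for the live environment
    "testing": "#ffcc00",     # alerting yellow for testing
    "sandbox": "#007bff",     # distinct blue for sandbox/development
}
```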
Under the App Color section, administrators will also see information related to the Subscription Plan. This includes the current plan, its status, and the remaining days of subscription.
The App Color setting is a simple yet effective tool to help users quickly recognize the environment they are working in. By carefully selecting and managing these colors, organizations can minimize errors and improve workflow efficiency.
User settings are an area of a system where users can adjust personal preferences, account settings, and security settings. User settings typically include options such as password changes, profile information, notification preferences, and possibly individual permissions to access certain functions or data.
In most organizations, only authorized persons have access to user settings, usually administrators or system administrators. This is because the settings may contain sensitive information that could compromise system security if changed by unauthorized persons. Administrators can manage user settings to ensure that they comply with the organization's policies and requirements and that system integrity is maintained.
Search bar: Allows administrators to quickly find users by searching for their names or other details.
User list: Displays a list of users with the following columns:
Name: The user's full name.
Email: The user's email address, which is likely used as their login identifier.
Administrator: A checkbox indicating whether the user has administrative privileges. Administrators typically have access to all settings and can manage other user accounts.
Actions: This column typically contains buttons or links for performing actions such as editing user details, resetting passwords, or deleting the user account.
Add User button: This button is used to create new user accounts. Clicking it typically opens a form where you can enter the new user's details, such as their name, email, and whether they should have administrator rights.
Accessing user management: Navigate to Settings - Global Settings - Groups, Users and Permissions - Users, where you can add new users.
Adding a new user: In the user settings, click "Add User".
Fill out the form: A form will appear where you can enter the new user's information. Typical information includes:
Username: A unique name for the user, used to log in.
First name and last name: The user's name.
Email address: The user's email address, used for communication and notifications.
Password: A password for the user that must comply with the security policy.
User role: Set the user's role, e.g. standard user or administrator.
Company name: The legal name of the company as registered.
Street + number: The physical address of the company's headquarters or main office.
Postal code: The ZIP or postal code for the company's address.
City: The city in which the company is located.
State: The state or region where the company is located.
Country: The country in which the company operates.
Company ID: A unique identifier for the company, which can be used internally or for integrations with other systems.
VAT ID: The company's tax identification number, important for financial operations and reporting.
Commercial register ID: The company's registration number in the commercial register, which can be important for legal and official documentation.
Official company phone number: The primary contact number for the company.
Official company email: The main email address used for official communication.
The information entered here can be crucial for ensuring that documents such as invoices, official correspondence, and reports are formatted correctly with the right company details. It also helps maintain consistency in how the company is represented in various external communications and documents. After entering or updating the information, the administrator must save the changes by clicking the "Save" button to ensure that all adjustments are applied system-wide.
In addition, this section provides an overview of the subscription plan, with information about how many days remain, start and end dates, and a subscription usage meter that tracks the consumption of service tokens against what is allocated in the plan. This can help administrators monitor usage and plan subscription renewals or upgrades based on usage trends.
Global Settings:
Company Information: Define and edit basic details about the company, such as name, address, and other identifiers.
Groups, Users and Permissions: Manage user roles and permissions, enabling different levels of access to different functions within DocBits.
Integration: Set up integrations with other software or systems, extending DocBits' functionality with external services.
Document Types: Specify and manage the different types of documents that DocBits will process, such as invoices, purchase orders, etc.
Email Notification: Configure settings for email alerts and notifications related to document processing activities.
Dashboard: Customize the dashboard view with widgets and statistics that are important to users.
Document Processing:
Document Expiry: Set rules for how long documents are retained before being archived or deleted.
Import: Configure how documents are imported into DocBits, including source settings and file types.
OCR Settings: Adjust settings for Optical Character Recognition (OCR), which converts images of text into machine-encoded text.
Classification and Extraction: Define how documents are categorized and how data is extracted from them.
Master Data Lookup: Set up lookups to validate or enrich extracted data with existing master data.
List of Values: Manage predefined lists used for data entry and validation.
Export: Configure how and where processed documents and data are exported.
Module: Additional modules that can be configured to extend functionality.
API Licenses: Manage API keys and monitor usage statistics for APIs used by DocBits.
Supplier Settings: Configure and manage supplier-specific settings, possibly integrated with supplier management systems.
Cache Management: Adjust settings related to data caching to improve system performance.
Logging in as an administrator: Log in with your administrator privileges.
Accessing user management: Navigate to user settings where you can edit existing users.
Selecting users: Find and select the user whose details you want to change. This can be done by clicking on the username or an edit button next to the user.
Editing user details: A form will appear containing the user’s current details. Edit the required fields according to the changes you want to make. Typical details to edit include:
First and last name
Email address
User role or permission level
Saving Changes: Review any changes you made and click Save to save the new user details.
Review the impact of role changes: If you changed the user's role, review the impact of that change on security access levels. Make sure that after the role change, the user still has the permissions required to perform their duties.
Send notification (optional): You can send a notification to the user to inform them of the changes made.
After completing these steps, the user details are successfully updated and the user has the new information and permissions according to the changes made.
Selecting Users: Find and select the user whose access you want to remove. This can be done by clicking the user name or an edit button next to the user.
Removing Access: Click "delete" to remove the user.
Confirmation: You are asked to confirm the user’s removal.
Optional Notification: You can optionally send a notification to the user to inform them of the removal of their access.
Review Tasks and Documents: Before removing the user, review what tasks or documents are assigned to the user. Move or transfer responsibility for those tasks or documents to another user to make sure nothing gets lost or left unfinished.
Save Changes: Confirm the user’s removal and save the changes.
By following these steps, you can ensure that the user’s access is safely removed while properly managing all relevant tasks and documents.
Click on the + NEW button.
The following menu will be displayed:
Enter the details of the sub-organization you want to create, the name and description, then click on the SAVE button. You should then find your newly created sub-organization at the bottom of the list of existing sub-organizations.
Manage Users:
Adding Users: Administrators can create new user accounts and assign them to the appropriate sub-organization.
Assigning Roles and Permissions: Administrators can set the roles and permissions for users within a suborganization. This typically involves assigning access rights to specific documents, folders or functions in the document management system.
Manage profile details: Administrators can edit the profile details of users within the sub-organization, such as contact information or department affiliation. This allows for updated and accurate management of user data.
You can add a new user to the organization, with the option to remove the user from other groups.
Edit User:
Editing suborganization settings: Administrators can edit the settings and properties of a suborganization, including its name, description, or hierarchy level within the system.
Edit user details: Administrators can edit the details of individual users within a sub-organization, for example to adjust their access rights or update their contact information.
Delete User:
Deleting sub-organizations: Administrators may also have the ability to delete sub-organizations if they are no longer needed or if a restructuring of the organizational structure is required. When deleting a suborganization, administrators must ensure that all users and data associated with it are handled properly.
These management features enable administrators to effectively manage and adapt the user accounts and organizational structures within a document management system to meet the company's changing needs and processes.
Enable/Disable Groups and Permissions: A toggle that allows the system administrator to enable or disable the use of groups and permissions on the platform. When it is disabled, the system may fall back to a less granular access control model.
Group list: Displays the list of available user groups within the organization. Each group can be configured with specific permissions. Administrators can add new groups by clicking the "+ New" button.
Permissions table:
Displayed once a group is selected or a new group is being configured.
Lists all document types recognized by the system (e.g. INVOICE, CREDIT NOTE, DELIVERY CONFIRMATION).
For each document type, there are checkboxes for the various permissions:
View: Permission to see the document.
Update: Permission to modify the document.
Delete: Permission to remove the document from the system.
First Approval: Permission to perform the initial approval of the document.
Second Approval: Permission to perform a secondary approval (where applicable).
Navigate to Group Settings: Log in to your admin account and go to Group Settings in the admin panel.
This window will open:
Click the + New button: If you want to add a new group, click the + New button to start the process of adding a new group.
Fill out the table: Provide the group name and a description of the group.
Save the details: Once you have filled in the group and description, click the "Save" button.
Edit Groups: To edit a group, click "Edit"; here you can change the group name.
Enable Groups & Permissions: To make the group visible, "Groups & permissions" must be enabled.
Check the results: After saving, review the results to make sure the group was successfully added, edited, or updated.
Creating sub-organizations within a document management system serves to further organize and differentiate the structure and management of user accounts, documents, and workflows within an organization. Here are some purposes and benefits of creating sub-organizations:
Structure and organization: Sub-organizations make it possible to create a hierarchical structure within the document management system. This can help organize user accounts and documents by department, team, location, or other relevant criteria for clearer and more efficient management.
Rights management: By creating sub-organizations, administrators can set up detailed rights and access controls for different groups of users. This means that specific users or groups only have access to the documents and resources relevant to their respective sub-organization, which improves security and privacy.
Workflows and collaboration: Sub-organizations can facilitate collaboration and communication within specific teams or departments by centralizing access to shared documents, projects, or tasks. This promotes efficiency and coordination when working together on common projects or workflows.
Reporting and analysis: By organizing user accounts and documents into sub-organizations, detailed reports and analyses can be created about the activities and performance of individual teams or departments. This gives administrators and managers insight into how the document management system is used at the organizational level.
Scalability and flexibility: Sub-organizations provide a scalable structure that can grow with the organization. New teams or departments can easily be added and appropriately integrated into the existing sub-organization scheme without affecting the overall structure of the document management system.
Overall, sub-organizations enable more effective management and organization of user accounts, documents, and workflows within a document management system by improving structure, security, and collaboration.
Go to Settings, Global Settings → Groups, Users and Permissions → Sub-Organizations as shown below.
You will then be taken to a page that looks something like this:
Here you will find your previously created sub-organizations, as well as the place to create new ones.
Integration settings play a crucial role in the efficiency and functionality of DocBits because they ensure seamless interaction with other tools and services. Here are some reasons why it is important to configure integration settings properly:
Increased efficiency: Integration with other tools and services can streamline workflows and increase efficiency. For example, documents can be exchanged automatically between DocBits and a CRM system, reducing manual entry and increasing productivity.
Data consistency: Integration makes it possible to exchange data seamlessly between different systems, improving data consistency and accuracy. This prevents inconsistencies or duplicate data entry that can lead to errors.
Real-time updates: Integration enables real-time updates between different platforms so that users always have the most current information. This is especially important for critical business processes that require real-time information.
Task automation: Integration makes it possible to automate routine tasks, saving time and resources. For example, notifications can be triggered automatically when a certain event occurs in another system, without manual intervention.
Improved user experience: A well-configured integration ensures a seamless user experience, since users do not have to switch between different systems to obtain relevant information. This improves user satisfaction and contributes to efficiency.
To configure integration settings properly, it is important to understand the organization's requirements and ensure that the integration fits seamlessly into existing workflows and processes. This requires thorough planning, configuration, and monitoring of the integration to ensure that it runs smoothly and delivers the desired value.
Check the permission settings for the documents or resources in question to ensure that users have the necessary permissions.
Make sure that users have access either directly or through group membership.
Check that the affected users are actually members of the groups that have been granted access.
Make sure that users have not been accidentally removed from relevant groups.
Check that individual permissions have been set at the user level that could override group permissions.
Make sure that these individual permissions are configured correctly.
Make sure that permissions are inherited correctly and are not blocked by parent folders or other settings.
Check permission history or logs to see if there have been any recent changes to permissions that could be causing the current issues.
Try accessing the affected documents with a different user account to see if the issue is user-specific or affects all users.
Make sure users are getting accurate error messages indicating permission issues. This can help you pinpoint and diagnose the problem more accurately.
If all other solutions fail, try reconfiguring permissions for the affected users or groups and ensure that all required permissions are granted correctly.
By following these troubleshooting tips, you can identify and resolve permission-related issues to ensure that users have the required access rights and can work effectively.
First, check which version of M3 your organization is currently using: V1 or V2.
Best practices for configuring and maintaining integration settings help ensure the efficiency, security, and reliability of the integration between DocBits and your Identity Service Provider (IdP).
Here are some best practices:
Regularly review settings: Perform regular reviews of integration settings to ensure all configurations are correct and up-to-date. Changes to systems or policies may require updates to the integration.
Certificate and metadata updates: Monitor SAML certificate and metadata expiration dates and update them in a timely manner to avoid service disruptions. Use automated processes or reminders to ensure no expiration dates are missed.
Security-conscious credential management: Treat credentials such as API keys or certificates with the utmost confidentiality and protect them from unauthorized access. Use secure methods for storing and exchanging credentials to ensure the integrity of the integration.
Documentation and logging of changes: Record and log all changes to integration settings in detailed documentation. This allows you to track changes and revert to previous configurations when needed.
Training administrators: Ensure that the administrators responsible for configuring and maintaining integration settings have the necessary knowledge and skills. Provide training and resources to ensure they understand and can implement integration best practices.
Setting up alerts and notifications: Configure alerts and notifications for critical events such as certificate expiration dates or failed authentication attempts. This will allow you to identify potential issues early and proactively address them.
By following these best practices, you can ensure that the integration between DocBits and your identity service provider works smoothly, is secure, and meets the needs of your organization.
Enabling or disabling the permission system with the toggle has several effects on functionality in DocBits.
When the permission system is enabled, access rights for users and groups are enforced.
Users only get access to the resources for which they have been explicitly authorized, based on the assigned permissions.
Administrators can manage the permissions for individual users and groups and ensure that only authorized persons can view or edit the data.
When the permission system is disabled, all access restrictions are removed and users typically have unrestricted access to all resources.
This can be useful when open collaboration is temporarily required without the restrictions of access control.
However, there may be an increased risk of data leaks or unauthorized access, since users may be able to access sensitive information for which they are not authorized.
Enabling or disabling the permission system is an important decision based on security requirements and how the organization operates. In environments where privacy and access control are critical, it is common to leave the permission system enabled to ensure data integrity and confidentiality. In other cases, it may be temporarily necessary to disable the permission system to facilitate collaboration, but this should be done with caution to minimize potential security risks.
Key: This is the unique identifier used by external applications to access DocBits' API. It is crucial for authenticating requests made to DocBits from other software.
Actions such as view, regenerate, or copy the API key can be performed here, depending on the specific needs and security protocols.
Entity ID: This is the identifier for DocBits as a service provider in the SSO configuration. It uniquely identifies DocBits within the SSO framework.
SLO (Single Logout) URL: The URL to which SSO sessions are sent to log out simultaneously from all applications connected via SSO.
SSO URL: The URL used for initiating the single sign-on process.
Actions such as "Download Certificate" and "Download Metadata" are available for obtaining necessary security certificates and metadata information used in setting up and maintaining SSO integration.
See Setup SSO
Tenant ID: This might be used when DocBits integrates with cloud services that require a tenant identifier to manage data and access configurations specific to the company using DocBits.
Upload file: Allows the admin to upload configuration files or other necessary files that facilitate integration with an identity provider.
Configure: A button to apply or update the settings after making changes or uploading new configurations.
This guide explains how administrators can define access control settings for different user groups in DocBits. Each group can be configured with custom permissions at the document and field level.
The access control panel allows the administrator to manage user groups and their respective permissions. Each group can have specific configurations regarding:
Document access: Whether the group has access to a document type.
Field-level permissions: Whether the group can read, write, or view certain fields within a document.
Action permissions: Which actions the group can perform, such as editing, deleting, bulk updating, and approving documents.
Navigate to Settings.
Select Document Processing.
Select Module.
Activate Access Control by turning on the toggle.
Navigate to Settings.
Navigate to Global Settings.
Select Groups, Users and Permissions.
Select Groups and Permissions.
To manage permissions for a group, such as PROCUREMENT_DIRECTOR, click the three dots on the right-hand side of the screen.
Select Show Access Control.
Access Control Overview:
In this section, you can enable or disable access for all document types, such as Invoice, Credit Note, Purchase Order, and more.
You can define access levels such as:
Access: Grants access to the document type.
List: Determines whether the document type is visible in the list view.
View: Specifies the default view for the document.
Edit: Grants permission to edit the document.
Delete: Allows the group to delete documents.
Bulk update: Enables bulk updating of the document type.
Approval levels: Sets the group's ability to approve documents (first and second approval).
Unlock document: Determines whether the group can unlock a document for further editing.
Example settings for PROCUREMENT_DIRECTOR:
Invoice: Enabled for all permissions, including editing and deleting.
Purchase Order: Enabled with normal permissions for all actions.
Field-level permissions:
Within each document type, specific fields can be configured with different permission levels.
Permissions include:
Read/Write: Users can both read and write the field.
Read/Owner Write: Only the owner of the document or field can write; others can read.
Read Only: Users can only view the field, not modify it.
Owner Read/Owner Write: Only the owner of the document or field can read and write.
Approval: Changes must be approved by authorized users or the administrator.
None: No specific permissions apply to the field.
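As an illustration, the permission levels above could be modeled as follows. The enum and function names are hypothetical, not the DocBits API:

```python
from enum import Enum

class FieldPermission(Enum):
    READ_WRITE = "read_write"
    READ_OWNER_WRITE = "read_owner_write"
    READ_ONLY = "read_only"
    OWNER_READ_OWNER_WRITE = "owner_read_owner_write"
    APPROVAL = "approval"
    NONE = "none"

def can_write(perm: FieldPermission, is_owner: bool) -> bool:
    # Write access depends on both the permission level and ownership.
    if perm is FieldPermission.READ_WRITE:
        return True
    if perm in (FieldPermission.READ_OWNER_WRITE,
                FieldPermission.OWNER_READ_OWNER_WRITE):
        return is_owner
    return False
```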
In document processing, APIs can be used to automate various tasks such as extracting text from documents, analyzing document contents, converting between different file formats, and more. Here are some examples of APIs in document processing:
OCR (Optical Character Recognition) API: This type of API allows you to extract text from images or scanned documents, for example turning a scanned invoice into searchable text.
NLP (Natural Language Processing) API: This API enables analysis of text content in documents, including keyword identification, entity recognition, sentiment analysis, and more, for example recognizing supplier names in extracted invoice text.
Conversion API: This type of API allows conversion between different file formats, for example from PDF to Word or from Word to PDF.
Document Management API: This API allows you to upload, download, and manage documents in a document management system, for example archiving a processed contract. A combined sketch of such calls follows below.
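As an illustration, here is a minimal sketch of what such calls could look like, using Python's requests library. The base URL, endpoints, and payloads are hypothetical placeholders, not the documented API of DocBits or any specific service:

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical document-processing API
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# OCR: upload a scanned document and receive the extracted text.
with open("invoice_scan.pdf", "rb") as f:
    ocr = requests.post(f"{BASE_URL}/ocr", headers=HEADERS, files={"file": f})
text = ocr.json().get("text")

# NLP: analyze the extracted text for entities and sentiment.
nlp = requests.post(f"{BASE_URL}/analyze", headers=HEADERS, json={"text": text})
print(nlp.json().get("entities"))

# Conversion: convert a Word document to PDF.
with open("contract.docx", "rb") as f:
    conv = requests.post(f"{BASE_URL}/convert?target=pdf",
                         headers=HEADERS, files={"file": f})
with open("contract.pdf", "wb") as out:
    out.write(conv.content)

# Document management: upload the processed document to an archive.
with open("contract.pdf", "rb") as f:
    requests.post(f"{BASE_URL}/documents", headers=HEADERS,
                  files={"file": f}, data={"type": "contract"})
```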
These examples show how APIs can be used in document processing to automate various tasks and improve efficiency. The exact functionality and syntax depend on the particular API and its specific features.
Instructions for viewing, copying or regenerating the API key
API key management is an important aspect when it comes to the security of integrations and access to external services through APIs. Here are some steps to manage API keys and best practices for their security:
View and copy the API key:
Navigate to the API key settings in your DocBits account. Here you can find the API key; click "Copy" to copy the key.
Handling API keys with security in mind:
Treat API keys like sensitive credentials and never share them with anyone. Store API keys securely and use encryption if you need to store them locally. Update API keys regularly to ensure security and minimize the risk of unauthorized access. Avoid using API keys in public repositories or unsecured environments as they could potentially be intercepted by attackers.
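As a minimal sketch of these practices, the key can be read from an environment variable and sent as a bearer token. The endpoint URL, header scheme, and variable name below are assumptions for illustration, not documented DocBits specifics:

```python
import os
import requests

# Read the API key from an environment variable instead of hard-coding it.
api_key = os.environ["DOCBITS_API_KEY"]  # hypothetical variable name

# Send the key as a bearer token; never log or print it.
response = requests.get(
    "https://api.docbits.com/v1/documents",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {api_key}"},
    timeout=30,
)
response.raise_for_status()
```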
Limit API key permissions:
Give API keys only the permissions required for the specific integration or service. Avoid excessive permissions to minimize the risk of abuse. Regularly review API key permissions and remove unnecessary permissions when they are no longer needed.
Logging and monitoring API calls:
Implement logging and monitoring of API calls to detect suspicious activity or unusual patterns that could indicate potential security breaches. Respond quickly to suspicious activity and, if necessary, revoke affected API keys to minimize the risk of further damage.
By carefully managing and securing API keys, organizations can ensure that their integrations and access to external services via APIs are protected and the risk of unauthorized access is minimized.
Configuring Single Sign-On (SSO) in DocBits requires a few steps to set up and configure. Here is a step-by-step guide:
Accessing SSO settings:
Log in to your DocBits account as an administrator.
Navigate to settings and look for Single Sign-On or SSO.
Configuring SSO parameters:
Enter the required SSO parameters such as the Entity ID, Single Log-Out (SLO) URL, and Single Sign-On (SSO) URL.
The Entity ID is a unique identifier for your service or application.
The SLO URL is the URL used for Single Log-Out to log users out of all services when needed.
The SSO URL is the URL that will redirect users to the Identity Provider for authentication.
Download certificates and metadata:
The identity provider (IdP) typically provides a certificate that DocBits uses to verify the SAML authentication response.
Download the certificate and store it securely.
The metadata download contains all the necessary configuration information for SSO integration. This includes information such as the entity ID, SSO URL, certificate information, and more.
Download the metadata and store it locally or provide it to the identity provider.
Identity provider (IdP) configuration:
Log in to the identity provider and configure the application or service for SAML integration.
Use the downloaded metadata or the manually entered SSO parameters to add DocBits as a trusted application or service.
Make sure the IdP's configuration matches the SSO parameters specified in DocBits.
Testing SSO integration:
After the configuration is complete, perform a test of the SSO integration to ensure that users can successfully log in to DocBits using SSO.
Also, verify that Single Log-Out is working properly by logging out of DocBits and ensuring that you are logged out of other connected services as well.
Setting up SSO properly allows users to seamlessly log in to DocBits using their existing credentials, improving the user experience and increasing security.
Configuring the Identity Service Provider (IdP) to integrate with DocBits requires a few specific steps. Here is a guide:
Accessing the IdP configuration interface
Log in to your Identity Service Provider (IdP) as an administrator.
Navigate to the settings or configuration interface dedicated to managing SAML integrations.
Entering the Tenant ID:
Look for the section that allows configuration for new SAML integrations.
Enter the DocBits tenant ID. This ID identifies your DocBits account to the IdP and enables secure communication between the two systems.
Importing the required files:
DocBits usually requires downloading metadata or adding specific configuration details. Check your IdP's documentation to see what steps are required.
Download the DocBits metadata file or import it into your IdP's configuration menu. Alternatively, you can manually enter the required configuration details, depending on what your IdP supports.
Configure integration settings:
Make sure the integration settings, such as the SSO URL, Entity ID, and SAML certificate, are correct.
Check that the Single Log-Out (SLO) URL and other required parameters are configured correctly. These are critical for smooth authentication and logout via SAML.
Verify configuration:
Take time to make sure all information entered is correct and that there are no typos or misconfigurations.
Run tests to ensure that users can successfully log into DocBits via SAML and that Single Log-Out is working properly.
Security considerations:
Make sure all transferred files and configuration details are handled securely to avoid data leaks or unauthorized access.
Protect sensitive information such as SAML certificates and credentials from unauthorized access and store them in a safe location.
How to set up SSO with INFOR Portal V2
The URL starts with https://mingle-portal.eu1.inforcloudsuite.com/<TENANT_NAME>, followed by your personal extension.
Choose the option Cloud Identities and use your login details
On the new Portal, the way you access this menu now is by selecting the OS option in the left menu. If you do not see it in the menu, click on See More to view all applications.
Select Security, in the OS menu, to be taken to the area for adding a new service provider. The steps are the same from this point on.
Then, in the left-hand menu, select the option Security Administration and then Service Provider.
You will see this window with the Service Providers.
Now click on the "+" sign and add DocBits as a Service Provider.
Log in on URL https://app.docbits.com/ with the login details you received from us.
Go to SETTINGS (on top bar) and select INTEGRATION, under SSO Service Provider Settings you will find all the information you need for the following steps.
Download the certificate
Fill in the Service Provider details with the help of the SSO Service Provider Settings in DocBits.
When you have filled out everything, remember to save it with the disk icon above Application Type.
Open the service provider DocBits again.
Click to view the Identity Provider Information underneath.
The file looks like this: ServiceProviderSAMLMetadata_10_20_2021.xml
Import the SAML METADATA in the SSO Settings.
Go to IDENTITY SERVICE PROVIDER SETTINGS, which is located under INTEGRATIONS in SETTINGS. Enter your Tenant ID (e.g. FELLOWPRO_DEV) and underneath that line you see the Upload file and the IMPORT Button, where you need to upload the previously exported SAML METADATA file.
Click on IMPORT and then choose the METADATA file that you have already downloaded from the SSO SERVICE PROVIDER SETTINGS
Click on CONFIGURE
Final Step
Log out of DocBits.
Go back to the left menu in Infor and select the application you just created.
You will be taken to the Dashboard of DocBits.
Using DocBits with your Microsoft login, without a (separate) password
Perform the following steps to add SAML SSO in Azure AD:
In Azure, go to your `Azure Active Directory` console
In the left panel, click `Enterprise applications`
Click `+ New application`
Click `+ Create your own application`
Enter a name for your application. Keep the remaining default selections.
Click on `Create`
Next, assign users or groups to the SSO configuration.
Important: You should already have created users and groups in Azure AD. If you don’t have any users or groups, create them now before proceeding.
Under `Getting Started`, click `Assign Users and Groups`.
Click `+ Add user`
Select the users and groups you want to assign to this SSO configuration. These users will be able to authenticate to DocBits (using SSO).
Click `Select`
When you’re satisfied with your selection, click `Assign`
Go to the `Groups` view list and find the assigned groups.
Next, you need to finish setting up single-sign-on in Azure.
In the left panel, click `Single sign-on`
Click `SAML`
Click `Upload metadata file`
Upload the DocBits metadata.xml, which you can find in the Settings menu Integration under SSO Service Provider Settings of your DocBits account.
Edit the `Basic SAML Configuration`
Check that the `Entity ID`, `ACS URL`, `Sign on URL` and `Logout URL` are populated correctly.
Download the newly generated Federation Metadata XML.
Upload the FederationMetadata.xml into the Identity Service Provider Settings of your DocBits account which you can find in the Settings menu Integration.
Click on OS in the left menu (as before); you will be taken to a menu where you need to select Portal. Next, click + Add Application on the right. Fill in the following information; the URL field is the SSO Endpoint URL from the Integration area of your DocBits settings. A Logical ID will also be generated for you; when done, click Save.
Application Type: DEFAULT_SAML
Display Name: DocBits
Entity ID: See Entity ID under SSO SERVICE SETTINGS
SSO Endpoint: Copy the SSO URL from SSO SERVICE SETTINGS and paste it into the SSO Endpoint field.
SLO Endpoint: Copy the SLO URL from SSO SERVICE SETTINGS and paste it into the SLO Endpoint field.
Signing Certificate: Upload the appropriate .cer file you downloaded in step 3c) from SSO SERVICE SETTINGS.
Name ID Format and Mapping: email address
Here are troubleshooting steps for common issues while integrating DocBits with an Identity Service Provider (IdP) or other services:
Verify configuration: Make sure the integration settings in DocBits are correctly configured and match your Identity Service Provider's requirements. In particular, check the SSO URL, Entity ID, and certificates.
Monitor logs: Monitor logs in both DocBits and on the Identity Service Provider's side to identify any error messages or warnings. These logs can provide clues as to where the problem lies.
Verify network connections: Make sure there are no network issues that could affect communication between DocBits and your Identity Service Provider. Check firewall settings, DNS configurations, and network access rules.
Test SSO processes: Perform test logins via Single Sign-On (SSO) to ensure users are successfully authenticated and redirected to DocBits. Be sure to note any error messages or redirection issues.
Checking permissions: Make sure users in DocBits have the required permissions to access the appropriate features or resources. Check the assignment of groups and roles.
Updating certificates: Make sure the SAML certificates and metadata are up to date on both sides. If a certificate has expired, you must update it and retest the integration.
Communicating with support: If you cannot identify the cause of the problem, contact DocBits support or your identity service provider. They can help you troubleshoot and provide specific guidance for your setup.
By following these troubleshooting steps, you can quickly identify and resolve the most common issues during the integration between DocBits and your identity service provider.
By accurately classifying documents into specific types, they can be easily categorized and managed. This makes it easier to find and retrieve documents when they are needed.
Automated workflows:
Many document management systems, including DocBits, use document types to drive automated workflows. For example, invoices can be automatically routed for approval while contract documents are sent for signature. Correct document type mapping allows these processes to be carried out efficiently and without errors.
Rights management and security:
Different document types can be subject to different access controls and security levels. By assigning the correct type to documents, it can be ensured that only authorized people have access to sensitive information.
Compliance and legal requirements:
Many industries are subject to strict legal and regulatory requirements regarding the handling of documents. Setting up document types correctly helps ensure that all necessary compliance requirements are met by handling and storing documents according to their category.
Defining specific document types:
Every type of document managed in the system should have a clearly defined document type. This includes, for example, invoices, contracts, reports, emails and technical drawings.
Attribution and metadata:
Each document type should have specific attributes and metadata that facilitate its classification and processing. For example, invoices could contain attributes such as invoice number, date and amount, while contracts have attributes such as contract parties, term and conditions.
Automation rules and workflows:
Specific rules and workflows should be defined for each document type. This can include automatic notifications, approval processes or archiving policies.
Training and user guidance:
Users should be trained to use the document types correctly and understand the importance of correct classification. This helps to minimize errors and maximize efficiency.
Regular review and adjustment:
The document types and associated processes should be regularly reviewed and adjusted as necessary to ensure they continue to meet current business needs and processes.
Setting up document types correctly is a key aspect of effectively using a document management system like DocBits. Not only does it make documents better organized and easier to find, it also enables automated processes, increases security, and ensures regulatory compliance. To fully realize the benefits, document types must be carefully defined, the corresponding processes implemented, and users trained regularly.
Layout Configuration
Description:
The layout configuration determines the structure and appearance of a document type.
Options:
Templates:
Upload or create document templates that define the general layout.
Zones:
Specify specific areas (zones) on the document, e.g. header, footer, content area.
Impact:
Improved accuracy:
By accurately defining layouts, systems can better identify where to find certain information, improving the accuracy of data extraction.
Consistency:
Ensuring that all documents of a type have a consistent layout makes processing and review easier.
Field Definitions
Description:
Fields are specific data points extracted from documents.
Options:
Field name: The name of the field (e.g. "Invoice number", "Date", "Amount").
Data type: The type of data contained in the field (e.g. text, number, date).
Format: The format of the data (e.g. date in DD/MM/YYYY format).
Required field: Indicates whether a field is mandatory.
Impact:
Data extraction accuracy:
By defining fields precisely, the correct data can be extracted precisely.
Error reduction:
Clear specification of field formats and data types reduces the likelihood of errors during data processing.
Automated validation:
Required fields and specific formats enable automatic validation of the extracted data.
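A minimal sketch of such a field definition, using the example names and the DD/MM/YYYY format mentioned above; the `Field` structure is illustrative, not a DocBits schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Field:
    name: str          # e.g. "Invoice number", "Date", "Amount"
    data_type: str     # e.g. "text", "number", "date"
    fmt: str | None    # e.g. "%d/%m/%Y" for a DD/MM/YYYY date
    required: bool

    def validate(self, value: str | None) -> bool:
        # A required field must be present.
        if value is None or value == "":
            return not self.required
        # A date field must match its declared format.
        if self.data_type == "date" and self.fmt:
            try:
                datetime.strptime(value, self.fmt)
            except ValueError:
                return False
        return True

date_field = Field("Date", "date", "%d/%m/%Y", required=True)
assert date_field.validate("31/12/2024")
assert not date_field.validate("2024-12-31")  # wrong format
```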
Extraction rules
Description:
Rules that determine how data is extracted from documents.
Options:
Regular expressions:
Using regular expressions to match patterns.
Anchor points:
Using specific text anchors to identify the position of fields.
Artificial intelligence:
Using AI models for data extraction based on pattern recognition and machine learning.
Impact:
Precision:
By applying specific extraction rules, data can be extracted precisely and reliably.
Flexibility:
Customizable rules make it possible to adapt the extraction to different document layouts and contents.
Efficiency:
Automated extraction rules reduce manual effort and speed up data processing.
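To illustrate the regular-expression approach, here is a minimal sketch that pulls an invoice number, date, and amount out of OCR text. The labels and patterns assume one possible invoice layout and are not DocBits defaults:

```python
import re

ocr_text = """
Invoice number: INV-2024-0042
Date: 15/03/2024
Total amount: 1,250.00 EUR
"""

# Each rule pairs a field name with a pattern anchored on a text label.
extraction_rules = {
    "invoice_number": r"Invoice number:\s*(\S+)",
    "date": r"Date:\s*(\d{2}/\d{2}/\d{4})",
    "amount": r"Total amount:\s*([\d.,]+)",
}

extracted = {}
for field, pattern in extraction_rules.items():
    match = re.search(pattern, ocr_text)
    if match:
        extracted[field] = match.group(1)

print(extracted)
# {'invoice_number': 'INV-2024-0042', 'date': '15/03/2024', 'amount': '1,250.00'}
```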
Validation rules
Description:
Rules for checking the correctness and completeness of the extracted data.
Options:
Format check: Validating the data format (e.g. whether a date is correctly formatted).
Value check: Checking whether the extracted values are within a certain range.
Cross-check: Comparing the extracted data with other data sources or data fields in the document.
Impact:
Data quality:
Ensuring that only correct and complete data is stored.
Error prevention:
Automatic validation reduces the risk of human error.
Compliance:
Adhering to regulations and standards through accurate data validation.
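A minimal sketch of the three checks named above (format check, value check, cross-check) applied to extracted invoice data; the ranges and reference values are illustrative:

```python
from datetime import datetime

def validate_invoice(data: dict, purchase_order: dict) -> list[str]:
    errors = []
    # Format check: the date must be a valid DD/MM/YYYY date.
    try:
        datetime.strptime(data["date"], "%d/%m/%Y")
    except (KeyError, ValueError):
        errors.append("date has an invalid format")
    # Value check: the amount must fall within a plausible range.
    amount = float(data.get("amount", "0").replace(",", ""))
    if not 0 < amount < 1_000_000:
        errors.append("amount is out of range")
    # Cross-check: the amount must match the referenced purchase order.
    if amount != purchase_order.get("total"):
        errors.append("amount does not match the purchase order")
    return errors

errors = validate_invoice(
    {"date": "15/03/2024", "amount": "1,250.00"},
    {"total": 1250.00},
)
assert errors == []
```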
Automation workflows
Description:
Workflows that automate the processing steps of a document type.
Options:
Approval processes:
Automatic forwarding of documents for approval.
Notifications:
Automatic notifications for certain events (e.g. receipt of an invoice).
Archiving:
Automatic archiving of documents according to certain rules.
Impact:
Increased efficiency:
Automated workflows speed up processing and reduce manual effort.
Transparency:
Clear and traceable processes increase the transparency and traceability of document processing.
Compliance:
Automated workflows ensure that all steps are carried out in accordance with internal guidelines and legal regulations.
User rights and access control
Description:
Control of access to document types and their fields.
Options:
Role-based access control:
Specify which users or user groups can access certain document types.
Security levels:
Assign security levels to document types and fields.
Impact:
Data security:
Protect sensitive data through restricted access.
Compliance:
Compliance with data protection regulations through targeted access controls.
User-friendliness:
Adaptation of the user interface depending on role and authorization increases user-friendliness.
The extensive customization options for document types in DocBits enable precise control of document processing and data extraction. By carefully configuring layouts, fields, extraction and validation rules, automation workflows, and user rights, organizations can ensure that their documents are processed efficiently, accurately, and securely. These customization options go a long way in optimizing the overall performance of the document management system and meeting the specific needs of the organization.
A Layout Manager enables an orderly and structured presentation of information.
Setting placement rules for different data elements ensures that information is presented consistently and clearly.
Using a Layout Manager enables users to capture information more efficiently.
A well-designed layout results in users knowing intuitively where to enter specific data, which speeds up data capture and reduces the risk of errors.
Consistent layouts ensure consistency in documentation.
When different documents use the same Layout Manager, a consistent presentation of information across different documents is ensured.
This is especially important in environments where many different users access or collaborate on documents.
A Layout Manager enables the appearance of documents to be customized depending on requirements.
Depending on the type of document or specific requirements, layouts can be customized to better present different types of data or information.
A well-configured layout manager makes it easier to scale documents.
When new data needs to be added or requirements change, the layout manager can be customized to easily handle those changes without the need for a major redesign.
Overall, using a layout manager is critical to ensure that data is captured and organized accurately. A well-designed layout improves the user experience, promotes efficiency in data entry, and contributes to the consistency and adaptability of documents.
Configuring document types in Docbits requires care and expertise to ensure that document processing is efficient and accurate. Here are some best practices for configuring document types, including recommendations for setting up effective regex patterns and tips for training models to improve accuracy:
Best practices: planning
Requirements analysis:
Conduct a thorough analysis of the requirements to understand which document types are needed and what information needs to be extracted from them.
Pilot projects:
Start with pilot projects to test the configuration and extraction rules before applying them to the entire system.
Best practices: document layout
Consistency:
Make sure that documents of one type have a consistent layout. This makes configuration and data extraction easier.
Use templates:
Use document templates to ensure consistency and simplify setup.
Best practices: fields and metadata
Unique field names:
Use unique and meaningful names for fields to avoid confusion.
Relevant metadata:
Define only the fields that are really necessary to reduce complexity and increase efficiency.
Formatting guidelines:
Set clear formatting guidelines for each field to facilitate validation and extraction.
Best practices: model training
Use quality data:
Use high-quality and representative data to train the models.
Data enrichment:
Enrich the training dataset by adding different document examples to increase the robustness of the model.
Iterative training:
Train the model iteratively and evaluate the results regularly to achieve continuous improvements.
Tips:
Transfer learning:
Leverage pre-trained models and tune them with specific document examples to reduce training time and increase accuracy.
Hyperparameter tuning:
Experiment with different hyperparameters to find the optimal configuration for your model.
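As a minimal illustration of such an experiment, the sketch below sweeps a small grid of configurations and keeps the best-scoring one; train_and_evaluate is a stand-in for your actual training routine, and the parameter values are arbitrary examples:

    from itertools import product

    def train_and_evaluate(learning_rate, batch_size):
        """Stand-in for a real training run; returns a validation score."""
        # Toy objective: in practice, train the model and return its accuracy.
        return 1.0 - abs(learning_rate - 0.01) - abs(batch_size - 32) / 1000

    best_score, best_params = float("-inf"), None
    for lr, bs in product([0.001, 0.01, 0.1], [16, 32, 64]):
        score = train_and_evaluate(lr, bs)
        if score > best_score:
            best_score, best_params = score, (lr, bs)

    print("Best configuration:", best_params, "with score", round(best_score, 4))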
Best practices: extraction and validation
Multi-step validation:
Implement multi-step validation rules to check the correctness of the extracted data.
Combine rule-based and ML-based approaches:
Use a combination of rule-based and machine learning approaches to extract and validate data.
Error management:
Set up mechanisms to detect and fix faulty extractions.
Best practices: automation workflows
Clearly defined workflows:
Define clear and traceable automation workflows for each document type.
Continuous monitoring:
Monitor automation workflows regularly to evaluate their performance and identify optimization potential.
Incorporate user feedback:
Integrate user feedback to continuously improve workflows.
Best practices: access control
Role-based access:
Implement role-based access controls to ensure that only authorized users have access to certain document types and fields.
Regular review:
Regularly review access controls and adapt them to changing requirements.
Configuring document types in Docbits requires careful planning and continuous adjustment to achieve optimal results. By applying the best practices above, you can significantly increase the efficiency and accuracy of document processing and data extraction.
If problems occur despite careful configuration, the following checks help to identify the cause systematically:
Check consistency:
Make sure all documents of the type have a consistent layout. Variations in layout can affect recognition.
Check zones and areas:
Check that the defined zones and areas are positioned correctly and cover the relevant information.
Update templates:
If the layout of the documents changes, update the templates accordingly.
Field names and data types:
Make sure field names are correct and data types are properly defined.
Formatting guidelines:
Check that the formatting guidelines for the fields are correct and match the actual data.
Check required fields:
Make sure all required fields are correctly recognized and filled in.
Test regex patterns:
Use a regex tool to test the patterns and make sure they capture the desired data correctly.
Increase specificity:
Adjust the regex patterns to be more specific and avoid misinterpretation.
Check anchor points:
Make sure the anchor points for data extraction are set correctly. If a pattern is not working as expected, check whether special characters or alternative formats need to be taken into account.
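A few lines of Python are often enough to test a pattern outside the system; the anchor wording and invoice-number format below are assumed examples:

    import re

    # Assumed rule: the anchor "Invoice No" followed by a number like 2024-00123.
    pattern = re.compile(r"Invoice\s+No\.?:?\s*(\d{4}-\d{5})")

    samples = [
        "Invoice No: 2024-00123",
        "Invoice No. 2024-00456 dated 01.05.2024",
        "Rechnung 2024-00789",  # different wording: the anchor does not match
    ]
    for text in samples:
        match = pattern.search(text)
        print(text, "->", match.group(1) if match else "no match")

The third sample shows why anchor points matter: if the document wording differs from the anchor, the rule silently fails and needs to be broadened.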
Analyze error messages:
Examine the error messages and log files for evidence of incorrect validations.
Refine rules:
Adjust the validation rules to make them more flexible or stricter if necessary.
Multi-step validation:
Implement additional validation steps to improve data quality.
Collect representative data:
Make sure the training data covers a wide range of examples that reflect all possible variations.
Retrain models:
Retrain the models regularly, especially when new document variants are added.
Feedback loops:
Use feedback loops to continuously improve the models.
Review workflow steps:
Review each step in the workflow to ensure that the data is processed and routed correctly.
Analyze logs:
Analyze the workflow logs to identify and resolve sources of errors.
Collect user feedback:
Ask users about their experiences and issues with the workflows to identify potential weak points.
Review access rights:
Make sure the right users have access to the relevant document types and fields.
Track changes:
Check whether recent changes in access rights may have affected document processing.
Regular review:
Perform regular access rights reviews to ensure everything is configured correctly.
Consult documentation:
Use DocBits system documentation and support resources to find solutions to problems.
Provide training:
Make sure all users are adequately trained to avoid common errors.
Updates and patches:
Keep the system up to date by regularly applying updates and patches that contain bug fixes and improvements.
Troubleshooting document type configuration requires a systematic approach and careful review of all aspects of the configuration. By applying the tips above, you can identify and fix common problems to improve the accuracy and efficiency of document processing in DocBits.
Group related fields together to create a logical and intuitive structure. This makes it easier for users to navigate and enter data.
Arrange fields so that frequently used or important information is easily accessible and placed in a prominent location.
Identify all required data fields and mark them accordingly. Ensure users are prompted to enter all necessary information to avoid incomplete records.
Use validation rules to ensure that entered data conforms to expected formats and criteria.
Use clear and precise labels for fields to help users enter the expected data.
Add instructions or notes when additional information is required to ensure users provide the correct data.
Test the layout and data entry thoroughly to ensure that all data is captured and stored correctly. Collect feedback from users and make adjustments to continuously improve user experience and data integrity.
Check the configuration of the fields in the Layout Manager and make sure they match the actual fields in the scanned documents.
Check that the positions and dimensions of the fields in the layout are correct and that they cover all relevant information.
Check the validation rules and format settings for the affected fields to make sure that the expected data can be captured correctly.
Make sure that the OCR (Optical Character Recognition) or other data capture technologies are properly configured and calibrated to ensure accurate extraction of the data.
Check the validation rules for fields to make sure they are appropriate and configured correctly.
Adjust the validation rules if necessary to ensure that they meet the requirements and formats of the captured data.
Revise the layout to improve the structure and organization of fields and ensure that important information is easily accessible.
Run user testing to get feedback on the usability of the layout and make adjustments to increase efficiency.
By applying these best practices and troubleshooting as appropriate, you can create efficient and accurate document layouts that enable smooth data capture and processing.
How to set up SSO with INFOR Portal V1 (LN and older M3 Interface)
Login Details to Cloud
Credentials are mandatory for accessing the Infor Cloud environment. The user should have the roles "Infor-SystemAdministrator" and "UserAdmin".
Config Admin Details (DocBits)
You should have received an email from FellowPro AG with the login details for the DocBits SSO Settings page. You will need a login and password.
Certificate
You can download the certificate in DocBits under SSO Service Provider Settings.
The URL starts with https://mingle-portal.eu1.inforcloudsuite.com/<TENANT_NAME>, followed by your personal extension.
Choose the option Cloud Identities and log in with your credentials.
After logging in you have access to the Infor Cloud; the burger menu gives you access to all applications.
On the right-hand side of the menu bar you will find the user menu, which takes you to user management.
In the menu on the left-hand side, select Security Administration and then Service Provider.
A window listing the Service Providers appears.
Now click the "+" sign to add DocBits as a Service Provider.
Log in at https://app.docbits.com/ with the login details you received from us.
Go to SETTINGS (top bar) and select INTEGRATION; under SSO Service Provider Settings you will find all the information you need for the following steps.
Download the certificate
Fill in the Service Provider form using the SSO Service Provider Settings in DocBits:
Application Type: DEFAULT_SAML
Display Name: DocBits
Entity ID: see Entity ID under SSO SERVICE SETTINGS
SSO Endpoint: copy the SSO URL from SSO SERVICE SETTINGS and paste it into the SSO Endpoint field.
SLO Endpoint: copy the SLO URL from SSO SERVICE SETTINGS and paste it into the SLO Endpoint field.
Signing Certificate: upload the .cer file you downloaded in step 3c) from SSO SERVICE SETTINGS.
Name ID Format and Mapping: email address
When you have filled out everything, remember to save it with the disk icon above Application Type.
Open the service provider DocBits again.
Click on "View the Identity Provider Information" underneath.
The exported file looks like this: ServiceProviderSAMLMetadata_10_20_2021.xml
Import the SAML metadata in the SSO Settings.
Go to IDENTITY SERVICE PROVIDER SETTINGS, located under INTEGRATIONS in SETTINGS. Enter your Tenant ID (e.g. FELLOWPRO_DEV); below that line you will see the file upload and the IMPORT button for the previously exported SAML metadata file.
Click on IMPORT and choose the metadata file that you downloaded from the SSO SERVICE PROVIDER SETTINGS.
Click on CONFIGURE
Go to Admin settings
Click on ADD APPLICATION in the top right corner
Fill out all fields as shown in the following image, but with your own SSO URL; don't forget to choose an icon, and click SAVE.
Final Step
Log out of DocBits.
Go back to the burger menu in Infor and select the icon you just created.
And you will be taken to the Dashboard of DocBits.
The Document Types section lists all document types that DocBits recognizes and processes. Administrators can manage various aspects of each document type, such as layout, field definitions, extraction rules, and more. This customization is essential for accurate data processing and compliance with organizational standards.
Document type list:
Each row represents a document type such as Invoice, Credit Note, Delivery Note, etc.
Document types can be standard or custom, as indicated by labels such as "Standard".
Edit layout: This option allows administrators to change the document layout settings, including defining what the document looks like and where the data fields are located.
Document subtypes: If there are document types with subcategories, this option allows administrators to configure settings specific to each subtype.
Table columns: Customize which data columns should appear when the document type is viewed in lists or reports.
Fields: Manage the data fields associated with the document type, including adding new fields or modifying existing ones.
Model training: Configure and train the model used to recognize and extract data from the documents. This can involve setting parameters for machine learning models that improve over time with more data.
Regex: Set up regular expressions used to extract data from documents based on patterns. This is particularly useful for structured data extraction.
Scripts: Write or modify scripts that execute custom processing rules or workflows for documents of this type.
E-DOC: Configure settings related to the exchange of documents in standardized electronic formats. You can configure XRechnung, FatturaPA, or EDI.
Log in: Log in to DocBits with your administrator rights.
Navigate: Go to Settings.
Document Types: Find the "Document Types" section.
Create a new document type:
Click the "+ New" button.
Basic information:
Enter a name for the new document type (e.g. "Invoice", "Contract", "Report").
Add a description explaining the purpose and use of the document type.
Amount and date format
Enter the format for the amount and date
Import Sample Documents
Upload sample documents via drag & drop
At least 10 documents must be uploaded for the training
Add Groups
Click the "Add" button and enter the group name.
You can also clone an existing document type.
Add fields:
Add new fields by clicking "Add".
Enter the name of the field (e.g. "Invoice number", "Date", "Amount") and the data type (e.g. Text, Number, Date).
Finish
Once all the details are entered, click "Finish" and the new document type is created.
Select a document type:
Select the document type you want to edit from the list of existing document types.
Under the document type you will find various editing options, for example editing the layout, fields, table columns, etc.
More Settings:
Click the Edit button next to the document type.
Here you can make further settings for the document type, such as design template, whether a document must be approved before export and many other details.
Define rules:
Go to the Extraction Rules section.
Create rules that specify how to extract data from documents. This may include using regular expressions or other pattern recognition techniques.
Test rules:
Test the extraction rules with sample documents to ensure that the data is correctly recognized and extracted.
Fine-tuning:
Adjust the extraction rules based on the test results to improve accuracy and efficiency.
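One practical way to support this test-and-tune cycle is a small harness that runs every rule over a set of sample texts and reports its hit rate; the rules and samples below are illustrative assumptions:

    import re

    # Hypothetical extraction rules per field.
    rules = {
        "invoice_number": re.compile(r"Invoice\s+No\.?:?\s*([A-Z0-9-]+)"),
        "total": re.compile(r"Total:?\s*([\d.,]+)"),
    }

    samples = [
        "Invoice No: INV-2024-001 ... Total: 1,250.00",
        "Invoice No INV-2024-002 ... Amount due: 300.00",
    ]

    for field, pattern in rules.items():
        hits = sum(1 for text in samples if pattern.search(text))
        print(f"{field}: matched {hits}/{len(samples)} sample documents")

A rule that misses samples, like the total pattern against the second document here, is a direct candidate for fine-tuning.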
Inform users:
Inform users of the new or changed document type and provide training if necessary.
Documentation:
Update system documentation to describe the new or changed document types and their usage.
By carefully setting up and managing document types in DocBits, you can ensure that documents are correctly classified and processed efficiently. This improves the overall performance of the document management system and contributes to the accuracy and productivity of your organization.
Log in: Log in to DocBits with administrator rights.
Navigate: Go to Settings.
Document Types: Find the Document Types section.
Access Document Types List
Access the list of existing document types. This list shows all defined document types, both active and inactive.
Activating or deactivating a document type
Select document type:
Select the document type you want to enable or disable.
Use the toggle function:
In the user interface, there is a toggle switch next to each document type that allows activation and deactivation.
Activation:
If the document type is currently deactivated, the switch may show a gray or off position.
Click the switch to activate the document type. The switch changes its position and color to indicate activation.
Deactivation:
If the document type is currently activated, the switch shows a colored or on position.
Click the switch to deactivate the document type. The switch changes its position and color to indicate deactivation.
Save:
Make sure all changes are saved. Some systems save changes automatically, while others require explicit confirmation.
Inform users:
Inform users about the activation or deactivation of the document type, especially if it impacts their work processes.
Update documentation:
Update system documentation to reflect the current status of document types.
Conclusion
The ability to enable or disable document types depending on the organization's needs is a useful tool for managing document processing in DocBits. By simply using the toggle function in the user interface, administrators can react flexibly and efficiently and ensure that the system is optimally aligned with current business needs.
The Layout Manager allows administrators to visually configure and modify the layout of document types by setting properties for various data fields and groups within a document. This interface helps ensure that the extraction models and manual data entry points align precisely with the document's structure as scanned or uploaded into Docbits.
Groups and Fields:
Groups: Organizational units within a document type that categorize related fields (e.g., Invoice Details, Payment Details). These can be expanded or collapsed and arranged to mirror the logical grouping in the actual document.
Fields: Individual data points within each group (e.g., Invoice Number, Payment Terms). Each field can be customized for how data is captured, displayed, and processed.
Properties Panel:
This panel displays the properties of the selected field or group, allowing for detailed configuration, such as:
Label: The visible label for the field in the user interface.
Field Name: The technical identifier used within the system.
Element Width in Percentage: Determines the width of the field in relation to the document layout.
Tab Index: Controls the tabbing order for navigation.
Run Script on Change: Whether to execute a script when the field value changes.
Display Label On Left: Whether the label is displayed to the left of the field or above it.
Is Textarea: Specifies if the field should be a textarea, accommodating larger amounts of text.
Select Model Type: Option to select which model type will handle the extraction of this field.
Field Length: Maximum length of data to be accepted in this field.
Banned Keywords: Keywords that are not allowed within the field.
Template Preview:
Shows a real-time preview of how the document will appear based on the current layout configuration. This helps in ensuring that the layout matches the actual document structure and is vital for testing and refining the document processing setup.
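To make these properties concrete, here is one field's layout configuration expressed as an illustrative Python structure; the keys mirror the panel described above, but the exact schema is an assumption, not the DocBits data model:

    invoice_number_field = {
        "label": "Invoice Number",       # visible label in the user interface
        "field_name": "invoice_number",  # technical identifier in the system
        "element_width_percent": 50,     # width relative to the document layout
        "tab_index": 1,                  # tabbing order for navigation
        "run_script_on_change": False,   # execute a script when the value changes
        "display_label_on_left": True,   # label to the left of the field, not above
        "is_textarea": False,            # single-line input rather than a textarea
        "field_length": 20,              # maximum accepted input length
        "banned_keywords": ["DRAFT"],    # keywords not allowed within the field
    }
    print(invoice_number_field["label"])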
Start by selecting the right field type for your data.
This depends on what type of information the field will contain.
Possible field types include text, number, date, drop-down menu, checkbox, etc.
Set validation rules to ensure that the data entered meets the expected criteria.
This may include checking for certain string patterns, numeric limits, date formats, or other conditions.
If certain fields typically have a default value, you can set that as the default value.
This makes data entry easier because users don't have to enter the same value every time.
Determine which user groups should have access to the field and what type of access rights they have.
This can include read, write, or edit rights.
In some cases, data from one field needs to be linked to data from another field or data source.
Configure appropriate links or relationships to ensure consistent data integration.
Determine under what conditions a field should be visible or hidden.
This can be useful for dynamically adapting the user interface based on certain data or user actions.
If necessary, enable historization of fields to track changes historically.
This allows you to track changes to the data and monitor the history of data changes.
Add notes or descriptions to explain to users how to use the field or what type of data is expected.
By following these steps and configuring the appropriate field properties, you can ensure that your documents meet specific requirements for data handling, user access, and data accuracy.
You can usually find the Template Preview option in the template editor interface.
Select the template whose layout you want to check.
This can be an existing template you want to make changes to or a new template you want to create.
Change the layout settings as needed. This can include adding, removing or adjusting groups, fields, columns, rows, fonts, etc.
As you change the layout settings, the preview updates in real time.
You can immediately see how your changes affect the look and structure of the template.
Take advantage of the ability to interactively customize the layout by moving, resizing, or making other adjustments to elements while checking the effects in real time in the preview.
Experiment with different layout configurations to find the best design for your needs.
Use the template preview to see how each change affects the final look.
Once you are happy with the layout, save your changes.
Depending on the software, you may also be able to find the option to commit your changes directly to update the template for use in other documents or processes.
Using the template preview allows you to make sure your layout meets your desired needs before committing changes. This allows you to efficiently customize the design and structure of your documents and ensure that they meet the desired visual and functional standards.
After you have made the desired customizations in the Layout Manager, look for the "Save" button to save the changes.
Click this button to save your changes in the Layout Manager. This backs up your layout customizations and ensures that they are available for future editing sessions.
Once your changes are saved in the Layout Manager, they are usually automatically applied to the document processing workflow that uses that specific document type.
New documents based on this template will inherit the updated layout settings when they are created. This means that the new documents will include the new groups, fields, or other layout customizations you made in Layout Manager.
Existing documents already created using this template may be treated differently depending on your software and configuration. In some cases, changes may be automatically applied to pre-existing documents, while in other cases, manual adjustments may be required to bring existing documents into line with the updated layout settings.
After you have saved the layout changes and they have been applied to the document processing workflow, it is advisable to test the changes to ensure that they work as intended.
Create new test documents or review existing documents to ensure that the updated layout settings are applied correctly and that data is captured and displayed as expected.
By following these steps, you can effectively save changes in Layout Manager and apply them to the document processing workflow. This ensures a smooth integration of your layout customizations into the document creation and processing process.
Navigate to the Settings area: Log in to DocBits as an administrator and navigate to the Document Type Management area.
Select the option to add a subtype: Click the “+ New” button to add a new subtype.
Name the subtype: Enter a descriptive name for the new subtype. This name should clearly describe the purpose of the subtype so that users can easily understand what type of documents it represents.
Configure initial settings: Set the initial settings for the new subtype, including the default fields, options, and templates to use for this subtype. This can include adding specific metadata fields, specifying approval workflows, or configuring user permissions.
Make optional configurations: Depending on your company's requirements or the nature of the documents, you can make additional configurations to customize the new subtype to your specific needs. This may include setting default values, validation rules, or custom actions.
Save the new subtype: Once you have entered all the required information, save the new subtype to create it in the document management system.
After the new subtype is created, users can add and manage documents of that type according to the initial settings you specified. Make sure you inform users about the new subtype and provide training or guidance, if necessary, to help them use it effectively.
Document subtypes are essentially specialized versions of the main document types. For example, under the main document type "Invoice" there may be subtypes such as "Standard Invoice", "Pro-forma Invoice", and "Credit Invoice", each with slightly different data requirements or processing rules.
Specific processing requirements: Different variations of the same document type often have different processing requirements. For example, different kinds of invoices may require specific fields, approval workflows, or validation rules based on a company's internal policies or the requirements of external partners.
Organizational customization: Using subtypes allows organizations to tailor their document processing to their specific needs. They can create subtypes that are precisely aligned with their individual business processes, rather than relying on generic solutions that may not meet all requirements.
Clear structuring: Using subtypes provides a clearer structure for document management. Users can more easily navigate between different variations of a document type and find the specific information they need without being distracted by irrelevant data or options.
Consistency and accuracy: Subtypes can help ensure consistency and accuracy in document capture and processing. By standardizing subtypes, organizations can ensure that all relevant information is captured and that data is structured in a uniform way.
Efficient processing: Using subtypes can increase the efficiency of document processing because users have access to pre-built templates and workflows that are optimized for specific document types. This reduces manual effort and minimizes errors and delays in the process.
Document subtypes in DocBits allow users to handle document variations in a more flexible and tailored way, resulting in improved efficiency, accuracy, and adaptability. They provide a powerful way to manage the complexity of document processing and increase productivity within an organization.
List of subtypes:
Each row represents a subtype of a primary document type.
Contains the name of the subtype and a set of actions that can be performed on it.
Actions:
Fields: Configure which data fields are included in the subtype and how they are managed.
Edit Layout: Modify the visual layout for how information is displayed and entered for this subtype.
Scripts: Attach or edit scripts that perform specific operations when documents of this subtype are processed.
Copy: Duplicate an existing subtype configuration to use as the basis for a new one.
Edit Document Sub Type: Edit the name or title of the subtype.
Delete: Remove the subtype if it is no longer needed.
Adding new subtypes:
The "+ New" button allows administrators to create new subtypes, defining unique properties and rules as needed.
Log in to DocBits and navigate to the area where you want to use the Layout Manager.
You can find this option in "Manage Document Types".
Select the document type you want to edit.
The Layout Manager will display the structure of that document type.
In the Layout Manager you will see a tree structure that represents the groups and fields of the selected document type.
You can navigate through this structure to edit the areas you want.
Depending on whether you want to add a new group or a new field, click the corresponding button (e.g. "Create new group").
Enter the name of the new group or field and select any settings you want, such as the type of field (text, number, date, etc.).
Select the group or field you want to remove.
Click the "Delete" button or use the appropriate keyboard shortcut (usually "Delete" or "Del").
Double-click the group or field you want to change.
Change any properties you want, such as the name, position, size, or field type settings.
Drag and drop groups or fields to change their order or place them inside or outside other groups.
Don't forget to save your changes before you leave the Layout Manager.
Click the "Save" button.
By following these steps, you can effectively navigate DocBits' Layout Manager and edit groups as well as fields within a document type. This allows you to customize the structure and appearance of your documents according to your needs.
Here are the main reasons why setting up table columns correctly is important:
Space optimization:
Carefully selecting and arranging columns can help you minimize the amount of space your database requires.
This is especially important when working with large amounts of data, as unnecessary or redundant columns can waste resources.
Data consistency:
By ensuring that each column only contains data that is relevant to its specific purpose, you can improve the consistency of your database.
This means that your data is cleaner and more reliable, which in turn improves the quality of your reporting.
Query performance:
Well-designed table columns can significantly improve the performance of database queries. For example, putting indexes on frequently queried columns can help queries run faster.
Avoiding unnecessary columns in query results can also increase query performance.
Easier reporting:
Organizing your data into meaningful column structures makes it easier to create reports and analyses.
Well-designed table columns can also increase the readability of reports and ensure that important information is easy to find.
Future-proofing:
By setting up the right table columns from the start, you can better prepare your database for future needs.
You can more easily add new features and make changes to the data model without affecting existing data.
Overall, setting up table columns correctly helps improve the efficiency, consistency and performance of your database, which in turn increases the quality of your data storage, querying and reporting.
Use naming conventions: Use consistent and meaningful naming conventions for your document types and subtypes. This makes it easier for users and administrators to navigate and identify the different types.
Use subtypes only when necessary: Create a subtype only when it is necessary to manage variations within a main document type. If the differences between the documents are minimal, it may be more efficient to treat them as separate instances of the main type.
Logically divide documents: Subtypes should be used to create logical groupings of documents that have similar processing requirements. This can make organization and management easier by grouping similar documents together.
Regularly review and clean up: Regularly review your document types and subtypes to ensure they are up to date and meet your organization's needs. Remove types or subtypes that are no longer needed to optimize system performance and improve the user experience.
Create documentation policies: Create clear documentation policies for the use of document types and subtypes in your organization. This can include guidance on creating new types, assigning permissions, and using metadata.
Train users: Regularly train your users on the use of document types and subtypes, including proven methods and best practices. This helps increase efficiency and reduce errors.
By following these best practices, you can effectively organize and manage your document types and subtypes, resulting in better use of your document management system.
Here are some troubleshooting tips for managing sub-types:
Resolve conflicts between similar subtypes: Check for conflicts between similar subtypes that could cause confusion. Make sure that the differences between subtypes are clearly defined and that they are different in their usage. If necessary, adjust configurations to resolve conflicts.
Resolve script execution errors: Check scripts configured to run when creating or editing subtypes for errors or inconsistencies. Check the syntax and logic of the scripts to make sure they work correctly. Test the scripts in a development environment to identify and fix problems before applying them to the production environment.
Ensure configuration consistency: Make sure that configurations for subtypes are consistent and do not have inconsistencies or contradictions. Check fields, layouts, permissions, and other settings to make sure they are configured correctly and meet the requirements of the subtypes.
Implement logging and auditing: Implement logging and auditing capabilities to identify and resolve subtype management errors and issues. Monitor subtype changes and track logs to identify and resolve potential issues early.
Provide user training and support: Provide training and support to users tasked with subtype management. Ensure they have the knowledge and skills required to effectively configure and manage subtypes. Provide support for any issues or questions that arise.
By applying these troubleshooting tips, you can identify and resolve subtype management issues to ensure the efficiency and effectiveness of your document management system.
Here are some best practices for designing table columns:
Use meaningful column names:
Choose column names that are clear and descriptive to improve the readability and understandability of your database structure. Avoid abbreviated or cryptic names.
Name columns to accurately reflect the content or meaning of the data stored in them. This makes later querying and reporting easier.
Choose appropriate data types:
Use the smallest possible data type that adequately meets the needs of your data to save storage space and improve performance.
Consider the type of data stored and choose the data type accordingly. For example: use INTEGER for integers, VARCHAR for strings, and DATE for dates.
Understanding required columns:
Mark columns as required (NOT NULL) if they are essential to the proper operation of your application and NULL values are unacceptable.
When deciding whether to mark a column as required, make sure that the application can logically handle NULL values and that NULL values will not cause unexpected errors.
Using foreign keys for relationships:
If your database has relationships between tables, use foreign keys to define those relationships. This improves data integrity and allows referential integrity constraints to be enforced.
Be sure to consider indexing foreign keys to optimize the performance of queries that access those relationships.
Regularly review and update:
Regularly review the database structure to ensure it meets the changing needs of your application. Make updates as needed to improve the efficiency and performance of your database.
Be sure to consider feedback from users and developers to identify and implement areas for improvement.
By applying these best practices, you can create a well-organized and efficient database structure that meets the needs of your application and provides a reliable foundation for storing, querying, and reporting on your data.
Incorrect column configurations:
Problem: Data is not displayed or stored correctly, possibly due to incorrect data types, missing constraints, or insufficient column names.
Solution:
Review the column configurations in the database table and make sure the data types are appropriate for each column.
Add missing constraints such as NOT NULL or UNIQUE to improve data integrity.
Rename columns to use more meaningful and unique names that accurately describe the column's contents.
Problems caused by deleted columns:
Problem: After deleting a column from a table, problems occur because reports, queries, or application logic still reference that column.
Solution:
Review all reports, queries, and application logic to make sure there are no more references to the deleted column.
Update all affected reports, queries, and application logic to reflect or remove the deleted column. If necessary, temporarily restore the deleted column and migrate the data to a new structure before permanently deleting it.
Missing or inconsistent data:
Problem: Data is incomplete or inconsistent due to missing required fields or incorrect data types.
Solution:
Review the table structure and make sure all required fields are marked NOT NULL to ensure that important data is not missing.
Perform data cleanup to correct inconsistent or invalid data and update data types if necessary to improve consistency.
Performance issues due to missing indexes:
Problem: Queries on large tables are slow because important columns are not indexed.
Solution:
Identify the most frequently queried columns and add indexes to improve query performance.
Be aware that too many indexes can also affect write and update performance, so balanced indexing is important.
By applying these solutions, you can resolve common table column-related issues and improve the efficiency, consistency, and performance of your database.
Here is a guide on how to properly use the "Copy" and "Delete" actions for efficient subtype management:
Navigate to the sub-type management settings in your document management system.
Select the subtype you want to copy, click "Copy" and enter a new name for the copied subtype if necessary.
Confirm the action and the system will create a copy of the selected sub-type with all existing settings, fields, layouts and scripts.
Navigate to the subtype management settings and select the subtype you want to delete.
Click the trash can icon on the right of the action menu.
Confirm the deletion action by accepting a confirmation message if prompted.
Note that deleting a subtype can irreversibly remove all documents and data associated with it. Make sure you take all necessary security precautions and check that the subtype is no longer needed before deleting it.
Proper use of these actions allows you to streamline sub-type management. Copying allows you to leverage existing configurations for new sub-types, while deleting allows for efficient cleanup of sub-types that are no longer needed. However, it is important to be careful when deleting to avoid data loss.
Here are detailed steps to add a new column:
Requirements analysis:
Review your application's requirements and identify the purpose of the new column. What type of data will be stored? How will this column be used in the application?
Choosing the right column type:
Choose the most appropriate column type based on the data that will be stored in the column. This can be AMOUNT for monetary amounts, STRING for strings, DATE for dates, etc.
Choosing the right column type is important to ensure data integrity and use storage space efficiently.
Choosing the right table:
To select the correct column type in a particular table, such as the invoice table, it is important to consider the specific requirements of the data to be stored in that table.
Deciding on column necessity:
Consider whether the new column is required or whether it should allow NULL values. If the column is mandatory, it should be marked as NOT NULL to ensure that important data is not missing.
Also consider whether the column may become a required field for your application in the future.
Database backup:
Before adding the new column, make a backup of your database to ensure that you have a working version to fall back on in case of any issues.
Executing the SQL statement:
Use the ALTER TABLE SQL statement to add the new column. The exact syntax depends on the database platform you are using, but in general the SQL statement looks like this:
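    ALTER TABLE table_name ADD new_column_name data_type [NOT NULL];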
Replace table_name with the name of your table, new_column_name with the name of the new column, and data_type with the column type you selected. The [NOT NULL] keyword indicates whether the column is mandatory.
Testing and validating:
After the new column is added, thoroughly verify that your application is working properly. Run tests to ensure that data is stored and retrieved correctly and that the new column is working as expected.
By carefully following these steps, you can successfully and effectively add a new column to your database table, choosing the correct column type and ensuring that the column is required when it is required.
Configuring subtypes allows you to customize the structure and behavior of the documents within a specific type. Here is an explanation of how you can use the "Fields", "Edit Layout", and "Scripts" options to customize each subtype to specific needs:
Fields: The "Fields" option allows you to add, edit or remove custom metadata fields for the subtype. These fields can contain information about the documents of that type such as title, author, date, category, etc. You can use different field types such as text boxes, dropdown lists, date values, etc. to capture the data according to your requirements.
Edit Layout: The Edit Layout option allows you to customize the appearance and arrangement of fields on the user interface. You can change the order of fields, create groups of fields to group related information, and adjust the size and position of fields on the page. This allows you to optimize the user experience and improve usability.
Scripts: The "Scripts" option allows you to add custom logic or automation for the subtype. You can use scripts to trigger specific actions when a document of this type is created, edited or deleted. This can be useful for implementing complex business rules, performing validations or integrating external systems.
The Table Columns interface in DocBits is used to specify the columns that appear in data tables for each document type. Each column can be configured to hold specific kinds of data, such as strings or numeric values, and can be essential for sorting, filtering, and reporting functions within DocBits.
Column configuration:
Column Name: The identifier for the column in the database.
Title: The human-readable title for the column that will appear in the interface.
Column Type: Determines the data type of the column (e.g. STRING, AMOUNT), which dictates what kind of data can be stored in the column.
Table Name: Indicates which table the column belongs to, linking it to a specific document type such as INVOICE_TABLE.
Actions:
Edit: Modify the settings of an existing column.
Delete: Remove the column from the table, which is useful if the data is no longer needed or if the data structure of the document type changes.
Adding new columns and tables:
Add New Table Column: Opens a dialog in which you can define a new column, including its name, whether it is required, its data type, and the table it belongs to.
Create New Table: Allows you to create a new table, defining a unique name that will be used to store data related to a specific set of document types.
This section is vital for maintaining the structural integrity and usability of data within the DocBits system, ensuring that the data extracted from documents is stored in a well-organized and accessible way.
Here are some reasons why accurate field configuration is important:
Data Integrity:
Proper configuration of fields ensures that the data entered into the system is correct and meets the required standards.
This helps to avoid errors and inaccuracies that could lead to incorrect analysis or decisions.
Data Consistency:
Consistent field configuration ensures that data is captured in a uniform manner, making it easier to compare and analyze.
For example, if a field for date inputs is incorrectly configured to allow different date formats, this can lead to confusion and inconsistencies.
Data Validation:
Configuring fields allows validation rules to be set to ensure that only valid data can be entered. This helps to detect errors early and improve data quality.
Data processing efficiency:
Accurate configuration of fields enables efficient data processing as systems are better able to understand and process the data. This improves efficiency in data extraction, transformation, and loading (ETL).
Data security:
Proper configuration of fields can also help ensure the security of data, for example by encrypting or masking sensitive information.
Overall, accurate configuration of fields in DocBits is critical to ensure data quality, consistency, integrity, and security. It helps organizations make informed decisions by accessing reliable and accurate data.
Editing and deleting columns in a database table are important operations that must be performed carefully to ensure data integrity and consider potential impacts on application logic and reporting.
Here are detailed steps for both actions:
Change title:
Click on the title of the column you want to change; a window will open in which you can change the title.
Requirement analysis:
Identify the reason for editing the column. You may need to change the data type, add or remove constraints, or change the column name.
Impact review:
Before making any changes, review how they will affect existing data and application logic. For example, changes to the data type may cause data to be converted or lost.
Database backup:
Back up your database to ensure you have a working version to revert to in case of any problems.
Executing the SQL statement:
Use the ALTER TABLE SQL statement to make the desired changes to the column. The exact syntax depends on the database platform you are using and the changes you want to make.
Data migration:
If you change the data type of a column, you may need to perform data migration to convert existing data to the new format.
Testing and validating:
After editing the column, thoroughly verify that your application is working properly and that the data is being stored and retrieved correctly.
Requirement analysis:
Make sure you understand the reasons for deleting the column. Is the column no longer relevant or are there other ways to consolidate it?
Impact review:
Analyze how deleting the column will affect existing data, application logic, and reporting. This may result in data loss or affect queries and reports.
Database backup:
Make a full backup of your database to ensure you can restore in case of unexpected problems.
Executing the SQL statement:
Use the ALTER TABLE SQL statement to remove the column. The exact syntax varies by database platform.
Data migration (if required):
If you have important data in the column you are deleting, you may need to perform a data migration to move that data to another location or delete it.
Adjusting application logic:
Make sure your application logic is adjusted accordingly to ensure it no longer accesses the deleted column.
Testing and validating:
Verify thoroughly that your application is working correctly and that all data and reporting functions are working as expected.
When editing or deleting columns, it is critical that you fully understand the impact of these actions and take appropriate precautions to maintain the integrity of your database and ensure that your application runs smoothly.
The Fields settings provide a user interface where administrators can manage the properties and behavior of the individual data fields associated with a document type. Each field can be customized to optimize the accuracy and efficiency of data entry and validation.
Field configuration:
Field Names: Lists the names of the fields, which typically correspond to the data elements within the document, such as "Invoice Number" or "Purchase Order Number".
Required: Administrators can mark fields as required, ensuring that data must be entered or captured for these fields in order to complete document processing.
Read-Only: Fields can be set to read-only to prevent modification after data entry or during certain stages of document processing.
Hidden: Fields can be hidden from view in the user interface, which is useful for sensitive information or for simplifying user workflows.
Advanced settings:
Force Validation: Ensures that data entered into a field meets certain validation rules before it is accepted.
OCR (Optical Character Recognition): This toggle can be enabled to allow OCR processing for a specific field, useful for automated data extraction from scanned or digital documents.
Match Score: Administrators can define a match score, a threshold used to determine the confidence level of data recognition or matching, which affects how data validation and quality checks are performed.
Action buttons:
Create New Field: Allows new fields to be added to the document type.
Edit icons: Each field has an edit icon that lets administrators further configure field-specific settings, such as data type, default values, or attached business logic.
Save Settings: Confirms the changes made to the field configurations.
If a field is marked as Required, it means that this field must be filled in before the document can be saved or processed.
To set this property:
Navigate to the field's settings in your DocBits system.
Enable the Required option for the relevant field.
Impact:
This setting ensures that important information is captured and that no documents can be processed without the required data.
If a field is marked as Read Only, it means that users can view the contents of this field, but cannot make any changes to it.
To set this property:
Go to the Field Options. Enable the Read Only option for the desired field.
Impact:
This setting can be useful to protect sensitive information or to ensure that important data is not accidentally changed.
If a field is marked as "Hidden", it means that the field will be hidden in the user interface and users will not be able to see or access it.
To set this property:
Go to the field options.
Enable the "Hidden" option for the corresponding field.
Impact:
This setting is often used to hide internal or technical fields that are irrelevant to the end user or are only needed for internal processing.
If a field is configured for OCR, it means that the system will try to extract the text from the document and insert it into this field. This setting is usually used for fields that are intended to be auto-filled.
To set this up:
Enable the OCR option for the corresponding field.
If necessary, configure the OCR parameters such as language, font, etc.
Impact:
Using OCR allows documents to be processed automatically by extracting information from texts and entering it into the appropriate fields, reducing manual effort and increasing efficiency.
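As an illustration of what such OCR extraction involves, the following sketch uses the open-source pytesseract library; this is not DocBits' internal implementation, and the file name and language are assumptions:

    from PIL import Image
    import pytesseract

    # Run OCR on a scanned page; "sample.png" is a placeholder file name.
    # The lang parameter selects the recognition language, e.g. "eng" or "deu".
    text = pytesseract.image_to_string(Image.open("sample.png"), lang="eng")
    print(text)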
If Force Validation is enabled for a field, the entered data is checked against defined criteria before it is accepted.
To set this up:
Enable the Force Validation option for the corresponding field.
Configure the validation rules accordingly, such as numeric limits, regular expressions, or relationships with other fields.
Save the changes.
Impact:
Forced validation checks the entered data against the specified criteria to ensure it is valid. This helps to detect errors early and improve data quality.
By comparing input data with reference data, the Match Score can help confirm the accuracy and validity of the data. If the Match Score exceeds a certain threshold, the match is considered successful.
To set this up:
Enable the Match Score option and set the desired threshold.
Save the changes.
Impact:
The Match Score is used to evaluate the accuracy of matches between input data and reference values. If the score obtained exceeds the set threshold, the match is considered successful. This is especially useful for fields that require data validation or matching, such as name or email address fields, e.g. when checking customer data.
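A match score of this kind can be approximated with a string-similarity ratio; the sketch below uses Python's difflib with an assumed threshold of 0.9:

    from difflib import SequenceMatcher

    def match_score(entered, reference):
        """Return a similarity ratio between 0.0 and 1.0."""
        return SequenceMatcher(None, entered.lower(), reference.lower()).ratio()

    THRESHOLD = 0.9  # assumed threshold; tune it to your data
    score = match_score("Acme Corporation", "ACME Corporation")
    print(round(score, 2), "->", "match" if score >= THRESHOLD else "no match")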
By carefully configuring these field properties, you can optimize document processing workflows and ensure that your data is correctly captured, protected, and processed efficiently.
Analyze your document workflow thoroughly to identify the different phases and steps a document goes through, from capture to processing to storage or release.
Identify the specific data that needs to be captured, reviewed, processed, or extracted at each step of the workflow.
Determine the key data that is critical to your business process or analysis.
Prioritize fields according to their importance to the business process or analysis to ensure they are captured and processed correctly.
Match field properties to specific data requirements, including their type (text, date, numeric, etc.), validation rules, and any required properties such as required or read-only.
Also consider security requirements, privacy regulations, and legal requirements when configuring field properties.
Design the fields to be flexible and extensible to accommodate future customizations or changes in document workflow or data requirements.
Make sure the configuration of the fields allows new data points or changed requirements to be easily and efficiently incorporated.
Perform extensive testing to ensure that the configured fields work correctly and produce the expected results.
Validate the field configuration by processing a large number of documents and verifying that the data captured meets the requirements.
By understanding the document workflow and data requirements and applying best practices in field configuration, you can ensure that your document processing system functions efficiently and accurately. This will help improve the quality of your data, optimize workflow, and increase the overall performance of your business.
Here is advice for troubleshooting common problems in a document processing system, including fields not capturing data correctly, OCR errors, and validation rule issues:
Check the configuration of the field in question to ensure the correct field type is being used and that all required properties are set correctly.
Make sure users have the correct instructions to enter data correctly into the field, and provide training or guidelines if necessary.
If the problem persists, run tests to verify whether the problem is systemic or only occurs with certain inputs. This can help you more accurately determine the cause of the problem.
Check the quality of the scanned documents, including the readability of the text and any distortion or blurring.
Adjust the OCR settings, including the language, text recognition algorithm, and other parameters, to improve accuracy. Perform OCR preview or test runs to check the performance of the OCR system and identify potential sources of errors.
If OCR errors persist, you may want to consider implementing an advanced OCR system or looking into external OCR services.
Review the configuration of validation rules to make sure they are set up correctly and meet the desired criteria.
Make sure validation rules are not too restrictive and that they carefully consider the actual data.
Run tests to make sure validation rules work as expected and check that they respond appropriately to unexpected data or edge cases.
Provide users with guidance and error messages to alert them to any validation errors and help them enter the correct data.
By systematically reviewing and troubleshooting these common issues, you can improve the performance and accuracy of your document processing system and ensure that it runs smoothly and efficiently.
Here are instructions for using the Force Validation and Match Value settings to improve data integrity and recognition accuracy in a document processing system:
This setting allows you to set rules that check whether the data entered meets certain criteria.
To set this up:
Go to the settings of the field in question.
Enable the Force Validation option.
Define the validation rules to check. These can be, for example, numeric limits, regular expressions for text fields, or relationships to other fields.
Impact:
Enforcing validation rules helps detect errors early and improves data quality. Users are prompted to enter correct data, which increases the integrity of the database.
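A minimal sketch of what such a rule check might look like conceptually (the rule format and field names are assumptions, not DocBits' API):

import re

# Hypothetical validation rules: a regex for a text field and numeric limits for an amount.
rules = {
    "invoice_number": lambda v: re.fullmatch(r"INV-\d{6}", v) is not None,
    "total_amount":   lambda v: 0 < float(v) < 1_000_000,
}

def validate(field: str, value: str) -> bool:
    rule = rules.get(field)
    return rule(value) if rule else True  # Fields without a rule pass by default.

print(validate("invoice_number", "INV-004711"))  # True
print(validate("total_amount", "-5"))            # False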
This setting allows you to match the entered value against a predefined reference value.
To set this up:
Navigate to the field's settings.
Enable the Match Value option.
Enter the reference value to compare the entered value against.
Impact:
Setting a match value allows you to ensure that the entered data matches a known standard or predefined norm. This is especially useful when you want to ensure that the data is consistent and meets certain criteria.
Using these settings can improve data integrity and recognition accuracy in your document processing system.
You ensure that only correct and valid data is captured, increasing the quality of your database and improving the reliability of your analytics and reports.
1. Navigate to Fields:
• From the main dashboard, click on the “Fields” option located in the sidebar.
• This will open the Field Settings page where you can manage document types and recognition settings.
2. Select Document Type:
• Under the “DOCUMENT TYPES” section, select the specific document type you wish to add or modify a field for.
1. Open the Add Field Dialog:
• Click on the “Create Field” button located in the respective section (e.g., Invoice Details, Payment Details, Purchase Order).
2. Enter Field Details:
• A dialog box titled “Document field details” will appear.
• Fill in the required details:
• Name: Enter the name of the new field.
• Title: Enter a descriptive title for the field.
• Select field type: Choose the appropriate field type from the dropdown menu.
3. Enable Charges Element:
• If this field is associated with a charge, check the “Enable charges element” box.
4. Select Costing Mapping:
• Upon enabling charges, a dropdown menu will appear.
• Select the appropriate charge type from the list (e.g., ADD ON - ORIGIN, FUEL SURCHARGE, TOTAL FREIGHT).
1. Save Settings:
• Click the “SAVE” button to add the new field with the specified charge mapping.
• If you need to make changes, click the “BACK” button to return to the previous screen.
2. Finalize Field Creation:
• After saving, the new field will appear in the list under the specified document type.
• Ensure that the OCR and Match Score settings are configured as needed for accurate recognition.
3. Complete the Setup:
• Once all desired fields are added and configured, click the “Save Settings” button at the bottom of the Field Settings page to apply your changes.
• Required Fields:
• If a field is mandatory, check the “REQUIRED” box next to the field name.
• Editing Existing Fields:
• To edit an existing field, click on the field name, update the details, and save the changes.
• Reassign Field Group:
• Use the “Reassign Field Group” option to change the grouping of fields if necessary.
• Master Data Settings:
• For advanced configuration, access the “Master Data Settings” to manage overall field and document type settings.
Log in and access Document Management:
Add a new field:
Click the "Create Field" option.
Basic Configuration:
Enter a name for the field and set other basic properties, such as whether it is required or whether it should be visible when editing.
Select Field Type:
Select the type of new field, such as text, date, dropdown, checkbox, etc.
Advanced Configuration:
Here you can set other properties such as validation rules, match values, read-only mode, hiding, and OCR settings.
Save:
After you have made all the necessary configurations, click "Save" or a similar button to create the new field.
Access field editing:
Navigate to the list of existing fields and find the field you want to edit.
Select a field:
Click the appropriate field to access the editing options.
Change the configuration:
Change the properties of the field as needed. This may include changing the name, the field type, adding or removing validation rules, setting match values, or adjusting other advanced settings.
Save:
Click "Save" to save the changes you made to the field.
By providing steps to add new fields and edit existing fields, as well as access to advanced configuration options, you can maximize the flexibility and adaptability of your document processing system. This allows you to structure and process your data exactly as you need it.
Model training enables administrators to oversee and manage the training of machine learning models specific to each document type. By providing a structured interface for importing sample data, training models, and testing their performance, Docbits ensures that data extraction capabilities continuously improve over time.
Metrics Overview:
Sample: Number of sample documents used for training.
Exported: Number of documents successfully exported after processing.
Company Σ: Total number of company-specific documents processed.
Total Σ: Total number of documents processed across all categories.
Training and Testing Options:
Import: Allows administrators to import new training datasets, typically structured examples of documents the system should recognize.
Train Model: Starts the training process using the imported data to improve the system's recognition and extraction capabilities.
Test Classification: Allows the model to be tested in order to evaluate its performance in classifying and extracting data from new or unseen documents.
Action Buttons:
Create Field: Add new data fields for the model to recognize and extract.
Actions: This dropdown may include options such as viewing details, editing configurations, or deleting training data.
Here are some best practices for continuous model training:
Perform regular training:
Perform regular training cycles to ensure your model is up to date and adapts to changes in data and requirements.
The frequency of training can vary depending on the type of data and training progress, but it is important to train regularly to maintain model performance.
Use updated sample documents:
Use recent sample documents that are representative of the data your model will face.
This may include adding new documents, removing outdated documents, or editing metadata to ensure the training data is current and relevant.
Select diverse samples:
Make sure your training data covers a wide variety of scenarios and use cases to ensure the model is robust and versatile.
Consider different variations in layouts, languages, formats, and content to ensure the model works well in different situations.
Monitor model performance:
Regularly monitor the performance of the model using relevant metrics such as accuracy, precision, and recall.
Analyze the results of classification tests and validation checks to identify weak points and spot opportunities for improvement.
Incorporate continuous feedback:
Incorporate feedback from users and experts to continuously improve the model.
Collect feedback on misclassifications or inadequate results and use this information to adjust and optimize the model.
Automate the training process:
Automate the training process to increase efficiency and minimize human error.
Use tools and scripts to automatically perform model training, evaluation, and updating when new data is available or changes are required.
By implementing these best practices for continuous model training, you can ensure that your model is constantly improving and achieving optimal performance to meet the needs of your use case.
Regular model training is critical to ensure that a document processing system continues to work effectively and accurately as document formats and content change.
Here are some key reasons for regular model training:
Adaptation to new formats:
Documents are often created in different formats, be it PDF, Word, Excel, or others.
New versions of these formats may have additional features or changes in formatting that the processing system may not recognize unless it is updated accordingly.
By regularly training the model, the system can adapt to these new formats to ensure smooth processing.
Adaptation to changing content:
The content of documents can change over time, be it due to updates to business processes, changes in policies, or new industry standards.
Regular training allows the processing system to adapt to these changes and continue to deliver accurate results.
Optimizing accuracy:
By training the model with new data, algorithms and models can be continuously improved to increase the accuracy of document processing.
This is especially important in areas where precision and reliability are critical, such as processing financial documents or medical records.
Handling exceptions:
Regular model training allows the system to better identify and handle exceptions and boundary conditions.
This can help reduce errors and improve overall system performance.
Ensuring compliance:
In industries with strict compliance requirements, it is important that the document processing system is always up to date to meet legal requirements.
Regular training and updating of the model can help ensure the system complies with current standards.
Overall, regular model training is an essential component to the effectiveness and reliability of a document processing system. It allows the system to continuously adapt to changing requirements and deliver accurate results, which in turn improves efficiency and productivity.
To effectively manage training data, you can take the following steps:
Adding new records:
Collect new documents to serve as training data for your model.
Make sure these documents are a representative sample of the different types of data the model is designed to process.
Upload the new records to your training data repository.
Editing existing records:
Regularly review your existing training data and update it as needed. This may include editing document metadata, adding additional labels, or removing erroneous or non-representative records.
Removing records:
Identify outdated, inaccurate, or no longer relevant records and remove them from your training data set.
Make sure you have a clear process for deciding which records to remove and document that process.
Training data versioning:
Implement a version control system for your training data to track changes and keep a clear history of dataset changes. This allows you to restore older versions of the training data when needed and track changes.
Training data security:
Ensure your training data is appropriately protected, especially if it contains sensitive or confidential information. Implement access controls to ensure only authorized users can access the training data, and encrypt the data during transfer and storage.
Documentation and tracking:
Document all changes to your training data, including adding, editing, and removing datasets. This allows you to track the history of your training data and ensure you have current and relevant data for training your model.
By regularly managing and updating your training data, you can ensure that your model is trained with current and representative data and achieves optimal performance.
Here are solutions to some typical problems that can arise during model training:
Data format errors:
Make sure the training data is in the correct format and meets the model's requirements.
Check the data for missing values, incorrect encodings, or unexpected structures.
If necessary, convert the data to the correct format and perform preprocessing to ensure it is suitable for training.
Training model convergence issues:
If the model is struggling to converge or show consistent improvements, check the hyperparameters and training configurations.
Experiment with different learning rates, batch sizes, or optimization algorithms to facilitate convergence.
If necessary, reduce the model complexity or increase the amount of training data to improve model performance.
Unexpected model performance degradation:
If the model shows unexpectedly poor performance after training, check the training data for possible errors or inaccuracies.
Analyze the error patterns and check if certain classes or features are classified poorly.
Run further tests with new training data to ensure that the model is consistent and reliable.
Overfitting or underfitting:
Monitor model performance for overfitting or underfitting, which can lead to poor generalization ability.
Experiment with regularization techniques such as L2 regularization or dropout to reduce overfitting.
Increase the amount of training data or data variation to avoid underfitting and improve model performance.
Lack of representativeness of training data:
Make sure your training data covers a sufficient variety of scenarios and use cases to prepare the model for different situations.
If necessary, supplement the training data with additional examples or synthetic data to improve coverage and increase model performance.
By identifying and fixing these issues specifically, you can improve the performance of your model and ensure that it works effectively and reliably to meet the needs of your use case.
Regex, short for "Regular Expressions", is an extremely powerful method for pattern recognition in texts. It allows you to search for specific strings or patterns within texts, offering a high level of flexibility and precision.
In terms of data extraction from structured text formats such as documents, Regex plays a crucial role for several reasons:
Precise pattern recognition:
Regex allows you to define precise patterns for which text should be searched.
This is particularly useful when the data to be extracted follows a specific format or structure.
Flexible adaptation:
Since Regex offers a wide range of operators and constructs, complex patterns can be defined to extract data in different formats and variants.
This allows for flexible adaptation to different document structures.
Efficient processing:
Regex enables efficient processing of large amounts of text, since pattern searches are usually quick and even large text documents can be searched in an acceptable time.
Automation:
Regex can be used in scripts and programs to automate the extraction process.
This is especially useful when large volumes of documents need to be processed, as manual extraction would be time-consuming and error-prone.
Validation and Cleansing:
Apart from extracting data, Regex also allows validation and cleaning of texts.
By defining patterns, unwanted strings can be identified and removed, resulting in cleaner and more consistent data.
Overall, using Regex provides an effective way to analyze structured text formats and extract data accurately and efficiently, which in turn is of great use for various applications such as data analysis, text processing, information extraction, and machine learning.
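To make this concrete, here is a short Python sketch that pulls an invoice number, date, and amount out of semi-structured text (the labels and patterns are illustrative assumptions, not patterns from DocBits):

import re

text = "Invoice No: INV-2024-0042  Date: 21.05.2024  Total: 1.234,56 EUR"

patterns = {
    "invoice_number": r"Invoice No:\s*(\S+)",
    "date":           r"Date:\s*(\d{2}\.\d{2}\.\d{4})",
    "amount":         r"Total:\s*([\d.,]+)",
}

for field, pattern in patterns.items():
    match = re.search(pattern, text)
    print(field, "->", match.group(1) if match else None)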
When using regex for document processing, there are some best practices to keep in mind to create and maintain effective and maintainable patterns:
Keep patterns simple and readable:
Complexity is often the enemy of maintainability.
It is advisable to keep regex patterns as simple and clear as possible.
Avoid overly complex expressions that are difficult to understand and use comments to explain how the pattern works.
Test patterns thoroughly before deployment:
Before deploying regex patterns in a production environment, thorough testing is essential.
Use test data that covers a wide range of possible scenarios and carefully review the results.
Also be aware of edge cases and unexpected variations in the data.
Document regex patterns for ongoing maintenance:
Good documentation is critical to ensuring the maintainability of regex patterns.
Describe how the pattern works, its purposes, and potential limitations.
Also, make notes about changes and updates to help other developers understand and maintain the patterns.
Promote modularity:
Break complex regex patterns into smaller, more easily understood parts.
This promotes reusability and makes maintenance easier.
Use named groups and user-defined functions to make your pattern more modular.
Performance optimization:
When processing large amounts of data, performance is an important factor.
Optimize your regex patterns to maximize processing speed.
For example, avoid excessive use of greedy quantifiers and inefficient constructs.
Regular review and update:
Review your regex patterns regularly for updates and improvements.
New requirements and changing data formats may require changes to the patterns.
Also update the documentation accordingly.
By following these best practices, you can ensure that your regex patterns are robust, efficient and maintainable, which in turn improves the reliability and scalability of your document processing solution.
Define the goal:
First, clarify what type of data you want to extract and in what context it occurs.
Understand the structure and format of the data you want to capture.
Identify the pattern:
Analyze sample data to identify patterns or structures that are characteristic of the data you want to extract, keeping in mind possible variations and edge cases.
Use Regex Operators:
Choose the appropriate Regex operators and constructs to describe the identified patterns.
These include metacharacters such as '.' (any single character), '*' (zero or more occurrences), '+' (one or more occurrences), '?' (zero or one occurrence), and character classes such as '\d' (digit), '\w' (word character), and '\s' (whitespace).
Test the pattern:
Use test data to make sure your regex pattern correctly captures the desired data while taking into account possible edge cases.
Use online regex testers or special software tools to do this.
Optimize the pattern:
Check your regex pattern and optimize it if necessary to make it more precise and efficient.
For example, avoid patterns that are too general and could return too many unwanted matches.
Document the pattern:
Document your regex pattern, including its purposes, how it works and possible limitations.
This will make it easier for other developers to use and understand the pattern.
Implement the pattern:
Integrate your regex pattern into your application or script to extract and further process the desired data.
Use groupings '( )' to define subpatterns and control their repetition.
Consider special cases and constraints in your pattern.
Be specific but not too restrictive to capture variations of the expected data.
Be case sensitive when relevant and use the i modifier for case independence when appropriate.
Experiment with your pattern and check the results regularly to make sure it is working correctly.
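A brief Python sketch of these tips, using a named group and the case-insensitive flag (the anchor phrase "Customer ID" is an assumed example):

import re

# A named group makes the match self-documenting; re.IGNORECASE makes the
# anchor phrase match regardless of capitalization.
pattern = re.compile(r"customer id:\s*(?P<customer_id>[A-Za-z]\d{5})", re.IGNORECASE)

for line in ["Customer ID: P32180", "CUSTOMER ID: p99999"]:
    m = pattern.search(line)
    if m:
        print(m.group("customer_id"))  # P32180, then p99999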
Use online regex testers:
Online regex testers are useful tools to check your regex patterns with test data and visualize the behavior of the pattern. They allow you to step through the matching process and identify potential problems.
Check the data context:
Make sure you understand the context of the data your regex pattern is working with. Sometimes unexpected characters or structures in the text can cause the pattern to not work as expected.
Check greedy quantifiers:
Greedy quantifiers like * and + can cause the pattern to capture too many characters and thus produce unexpected matches. Use greedy quantifiers with caution and check that the matching process works as expected (see the sketch after these tips).
Debugging with grouping:
Use groupings ( ) to isolate subsections of your regex pattern and check their match separately. This allows you to understand which parts of the pattern might be causing problems.
Watch for special characters:
Some characters in regex have special meanings and need to be escaped if they are to be treated as normal characters. Make sure you use the correct escape characters to avoid unexpected results.
Test with different datasets:
Use a variety of test data to make sure your regex pattern works correctly in different scenarios. This includes typical datasets as well as edge cases and unexpected variations.
Consult the documentation:
Check the documentation of your regex implementation to make sure you understand the specific properties and peculiarities of the regex syntax used. Sometimes nuances in the syntax can lead to unexpected behavior.
Seek community support:
If you continue to have problems with your regex pattern, you can seek support in developer forums or Q&A platforms. Other developers may be able to offer helpful insights or solutions.
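Here is the sketch referenced above, illustrating the greedy-quantifier and special-character pitfalls with made-up sample text:

import re

html = "<td>100.00</td><td>200.00</td>"

# Greedy: .* consumes as much as possible and spans both cells.
print(re.findall(r"<td>(.*)</td>", html))   # ['100.00</td><td>200.00']
# Lazy: .*? stops at the first closing tag.
print(re.findall(r"<td>(.*?)</td>", html))  # ['100.00', '200.00']

# Special characters must be escaped to be matched literally.
price_label = "Total (EUR):"
pattern = re.escape(price_label) + r"\s*([\d.,]+)"
print(re.search(pattern, "Total (EUR): 1.234,56").group(1))  # 1.234,56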
By following these tips and working systematically, you can identify and fix most common regex pattern issues to ensure reliable data extraction.
Keep your scripts modular and well-structured.
Break complex tasks into smaller, more manageable modules.
Not only does this make your scripts easier to maintain and update, but it also allows you to reuse code and improve readability.
Implement robust error handling in your scripts.
Make sure your code detects error cases and responds appropriately, whether by catching and logging errors, issuing helpful error messages, or taking action to recover.
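For instance, a processing step might be wrapped in error handling along these lines (a generic Python sketch; the function names are placeholders, not DocBits APIs):

import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("docbits_script")

def process_document(document: dict) -> dict:
    # Placeholder for the actual processing step.
    if "invoice_id" not in document:
        raise ValueError("missing invoice_id")
    return document

def safe_process(document: dict):
    try:
        return process_document(document)
    except ValueError as exc:
        logger.error("Validation failed for document %s: %s", document.get("name"), exc)
        return None  # Signal failure so the workflow can route the document for review.

safe_process({"name": "scan_001.pdf"})  # Logs an error instead of crashing.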
Document the purpose, functions, and usage of each script in detail.
Describe what tasks the script performs, what inputs it expects, what outputs it generates, and how it integrates with the document processing workflow.
Clear documentation makes it easier for other developers to understand and maintain your scripts.
Comment your code thoroughly to explain its functionality and logic.
Use meaningful comments to explain important steps or complex parts of the code.
This not only makes the code easier for others to understand, but also for yourself when making future changes or updates.
Implement an effective version control system for your scripts.
This allows you to track changes, manage different versions, and revert to previous versions when needed.
This is especially useful when multiple developers are working on the same scripts or when you want to test different iterations.
Make sure your scripts are secure and free of potential security vulnerabilities.
For example, avoid unsafe practices such as directly executing user input or storing sensitive information in plain text.
Instead, implement security best practices and regularly audit your code for security vulnerabilities.
By following these best practices for script development in DocBits, you can create efficient, reliable, and well-documented scripts that improve the functionality and security of your document processing workflow.
Scripts can contain errors, whether due to syntax errors, logic errors, or unforeseen edge cases.
Testing in a controlled environment allows these errors to be identified and fixed before the script is deployed in a live workflow.
This helps avoid potential issues and downtime.
In a live workflow, using faulty scripts can result in data loss or data corruption, which can lead to serious security issues.
Testing in a controlled environment allows potential security vulnerabilities to be identified and fixed before sensitive data is affected.
Scripts are designed to automate specific tasks or improve processes.
Thorough testing helps you ensure that the script performs the desired functions properly and produces the intended results.
This helps improve the efficiency and quality of document processing.
A controlled test environment allows you to test the script under different conditions and ensure that it works stably in different environments.
This is especially important when the script is deployed in different system configurations or with different data sets.
Testing in a controlled environment also allows you to check the usability of the script and ensure that it is easy to use and understand.
Feedback from the testing process allows you to tweak the script if necessary to improve the user experience.
Overall, thoroughly testing scripts in a controlled environment helps ensure the reliability, security, and effectiveness of document processing. It is an indispensable step to identify potential issues and ensure that the script works optimally before deploying it in a live workflow.
Use debugging tools:
Use debuggers or logging frameworks to trace the flow of your script and identify potential sources of errors.
Step-by-step execution:
Run your script step-by-step and check after each step that the expected behavior occurs.
This can help you pinpoint the exact time and cause of an error.
Print intermediate results:
Include targeted output from variables or intermediate results in your script to check the state of the code and understand what is happening.
Isolate the problem:
Try to isolate the problem to a specific place in the code or a specific input data to find the source of the unexpected behavior.
Check external dependencies:
Make sure that external resources or libraries are installed and configured correctly and that your script can access them properly.
Check changes:
If the unexpected behavior occurs after a code change, review your recent changes and consider whether they might have caused the problem.
Identify bottlenecks:
Analyze your script to identify bottlenecks or inefficient areas that might affect performance.
Optimize critical sections:
Review critical sections of your code and look for ways to optimize them, such as using more efficient algorithms or data structures.
Consider scaling:
Think about the scaling of your scripts and how they behave as the load increases.
Test your script under different load conditions to make sure it works efficiently even under heavy use.
Document your troubleshooting steps:
Keep track of the steps you took to diagnose and resolve issues.
This can help you identify and resolve similar issues more quickly in the future.
Seek resources and expertise:
Use online resources, forums, or the documentation of the scripting language you are using to get help with troubleshooting.
Sharing experiences with other developers can also be helpful.
Applying these tips will help you more effectively diagnose and resolve common scripting issues in DocBits and optimize the performance of your scripts.
EDI settings, short for Electronic Data Interchange, play a crucial role in electronic communication between business systems. EDI enables the automated exchange of business documents and data between different companies without the need for manual intervention. The importance of EDI lies primarily in improving the efficiency, accuracy and speed of data transfer, which leads to optimization of business processes.
In supply chain management, EDI settings enable seamless communication between suppliers, manufacturers, distributors and retailers. Purchase orders, shipping advices, invoices and other important documents can be automatically exchanged between the parties involved, resulting in improved inventory management, reduced delivery times and an overall more efficient supply chain.
In purchasing, EDI settings enable the automated exchange of purchase orders and order confirmations between companies and their suppliers. This shortens processing times, minimizes errors and makes it easier to track orders.
In finance, EDI settings enable the electronic exchange of invoices, payment advices and other financial documents between companies and their business partners. This speeds up the payment process, reduces the need for manual intervention and promotes accuracy in financial transactions.
Overall, EDI settings contribute significantly to improving efficiency, accuracy and transparency in various areas of business operations and are therefore an integral part of modern business practices.
Guide to using the XSLT Editor to create or modify transformations. Includes tips for testing and validating XSLT scripts to ensure they correctly transform document data into the required EDI format.
Opening the XSLT Editor:
Launch the XSLT editor of your choice. Popular options include Oxygen XML Editor, Altova XMLSpy, or simply a text editor with syntax highlighting for XSLT.
Creating or modifying transformations:
Define the rules for transforming the input data (e.g. XML) into the desired EDI format. Use XSLT templates to select the elements and attributes of the input XML and format them accordingly.
Use XSLT functions and statements such as xsl:template, xsl:apply-templates, xsl:for-each, xsl:value-of, etc. to perform the transformations.
Check your transformations carefully to ensure that all required data is extracted and formatted correctly.
Testing and validating XSLT scripts:
Use sample data to test your XSLT transformations. Ideally, this sample data should cover various scenarios and edge cases that may occur in the actual input data.
Run your XSLT scripts with the sample data and carefully check the output result. Make sure that the generated EDI output meets the expected specifications.
Validate your XSLT scripts against the XSLT specification to ensure they are syntactically correct and do not contain errors that could cause unexpected behavior.
Use tools such as XSLT debuggers to perform step-by-step testing when necessary and identify potential problems in your transformations.
By carefully creating, testing, and validating your XSLT scripts, you can ensure that they correctly transform the input data into the required EDI format. This is critical for successful electronic data interaction between different business systems.
In the XRechnung administration panel, you will encounter the following key components:
The Transformation process is essential for converting raw data, usually in XML format, into a structured format that meets specific requirements, like generating an invoice. In XRechnung, this is primarily achieved using XSLT (Extensible Stylesheet Language Transformations). XSLT is a language designed for transforming XML documents into other types of documents, like another XML, HTML, or plain text.
• XSLT Template: The XSLT file defines how the XML data is processed and what the final output should look like. It applies rules and templates to extract, manipulate, and output the data from the XML document.
• Elements and Attributes: The XSLT file contains specific elements and attributes that control the transformation process. For instance, <xsl:value-of> is used to extract the value of a specific node from the XML document.
• Modifying the XSLT:
• Edit Existing Templates: An admin can modify the existing XSLT templates to change how the input XML data is transformed. For example, if there’s a need to extract additional information from the XML document, an admin could add new rules in the XSLT file.
• Create New Versions: If changes are required, an admin can create a new version of the XSLT template. This ensures that previous versions remain intact for historical reference or rollback if needed.
Suppose the XSLT template extracts the invoice ID using:
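<xsl:value-of select="cbc:ID"/>

(Illustrative only; the actual select expression depends on the template and the source schema.)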
If a new field, such as a customer reference number, needs to be extracted, an admin might add:
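<xsl:value-of select="cbc:BuyerReference"/>

(Illustrative only; the actual element name depends on the source schema. In UBL-based invoices, cbc:BuyerReference is a common choice for a customer reference.)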
The Preview function allows admins to view the output generated by the XSLT transformation before finalizing it. This step is crucial for ensuring that the transformation rules work correctly and that the output meets the required standards.
• Real-Time Validation: The preview feature provides a real-time rendering of how the transformed data will look when applied to an actual document (like an invoice). This helps in catching errors or formatting issues early.
• Adjustments: If the preview shows discrepancies or errors, adjustments can be made directly in the transformation (XSLT) file.
• Customizing the Preview:
• Modify Preview Settings: An admin can adjust which parts of the transformation are previewed. For instance, they might focus on specific sections of the document or test new rules added to the XSLT template.
• Save and Iterate: After making adjustments, the preview can be refreshed to see the changes. This iterative process allows fine-tuning until the desired output is achieved.
If an admin notices that the date format in the preview is incorrect (e.g., showing YYYY-MM-DD instead of DD-MM-YYYY), they can modify the XSLT to format the date correctly and immediately see the result in the preview.
Extraction Paths define the specific paths within an XML or JSON structure from which data should be extracted. This process is essential for isolating key pieces of information within the document that will be used in the transformation or for other processing tasks.
• XPath and JSONPath: Extraction paths use languages like XPath (for XML) or JSONPath (for JSON) to specify the location of the data within the document. These paths are crucial in telling the system exactly where to find and how to extract the required information.
• Defining and Modifying Paths:
• Modify Existing Paths: An admin can modify the extraction paths if the data structure changes or if additional data needs to be extracted. This might involve changing the XPath or JSONPath expressions.
• Add New Paths: For new fields or data points, an admin can define new extraction paths. This would involve specifying the correct path in the XML or JSON document.
In an XML invoice document, if the path to the invoice ID is defined as:
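/Invoice/cbc:ID

(Illustrative XPath; the actual root element and namespace prefixes depend on the document.)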
And a new field, such as a shipping address, needs to be added, an admin might add:
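/Invoice/cac:Delivery/cac:DeliveryLocation/cac:Address

(Illustrative; in UBL-based invoices the delivery address typically sits under cac:Delivery.)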
Data formatting errors:
Carefully review the EDI structure and format of your messages to ensure they comply with standards and specifications.
Validate data fields for correct syntax and formatting according to agreed standards such as ANSI X12 or EDIFACT.
Make sure the transformations and templates used are correctly configured to properly format and interpret the data.
Partner compatibility issues:
Review your business partner's configurations and specifications to ensure they match your own.
Communicate with your partner to identify any discrepancies or incompatibilities and work together to find solutions.
Implement adjustments in your EDI configurations if necessary to improve compatibility with your partner.
Handling transmission errors:
Monitor your EDI transmissions regularly to identify potential errors or failures early.
Implement mechanisms for error detection and remediation, such as automated notifications of transmission errors or setting up retry mechanisms for failed transmissions.
Perform regular tests of your transmission processes to ensure they work reliably and without errors.
Documentation and logging of errors:
Keep detailed logging of all errors and problems in EDI transactions, including causes and actions taken.
Document solutions to recurring problems to resolve and prevent future errors more quickly.
Involve subject matter experts:
When necessary, bring in subject matter experts or EDI consultants to solve complex problems or address specific challenges.
Use resources such as forums, training, or support from EDI providers for additional assistance with troubleshooting.
By systematically applying these tips, you can effectively troubleshoot EDI transactions and ensure the reliability of your electronic business communications.
Best practices for managing EDI configurations include regular updates to adapt to changing standards, thorough testing of EDI templates, and maintaining clear documentation of all transformations and structure descriptions.
Regular updates and adaptations to changing standards:
Stay up to date with changes in the EDI standards such as ANSI X12, EDIFACT, or industry-specific standards.
Schedule regular reviews of your EDI configurations to ensure they comply with current standards.
Adapt your EDI templates and transformations accordingly to reflect new requirements and changes in the standards.
Thorough testing of EDI templates:
Perform comprehensive testing of your EDI templates to ensure they deliver the expected results.
Use both automated and manual testing methods to verify the accuracy and reliability of your transformations. Test different scenarios and edge cases to ensure your templates are robust enough to handle different data formats.
Clear documentation of all transformations and structure descriptions:
Maintain detailed documentation of all EDI transformations, including the XSLT scripts or other transformation rules you use.
Also document the structure descriptions of your EDI messages, including the segment, element and data type definitions.
Keep the documentation up to date and accessible to all team members working with the EDI implementation.
Versioning of configurations:
Implement versioning of your EDI configurations to track changes and revert to previous versions if necessary.
Use an appropriate version control system to track changes and ensure that all team members have access to the most current version.
Training and education of employees:
Ensure that your employees have the necessary knowledge and skills to effectively handle the EDI configurations.
Provide training and education to ensure your team is aware of the latest developments in EDI standards and practices.
By implementing these best practices, you can improve the efficiency, accuracy and reliability of your EDI configurations and ensure they meet the ever-changing needs of your business and your business partners.
The preview feature is an extremely useful tool to check the appearance and content of EDI messages before they are actually sent.
Here are some steps on how to use the preview feature to ensure that EDI messages meet the partner's requirements:
Previewing the EDI format:
Open the preview feature in your EDI system to get a preview of the generated EDI format. This allows you to check the layout and structure of the message to ensure that it meets the standards and specifications that your business partner expects.
Validating the data content:
Check the data content in the preview to ensure that all required fields are present and contain correct values. Make sure that data fields are placed in the correct segments and use the correct codes or labels.
Identifying formatting errors:
Ensure that the formatting of the EDI message follows standards, such as proper segment separators, field separators, and decimal separators. Also check the indentation and arrangement of segments to ensure the message is clear and easy to read.
Considering partner requirements:
Consider your business partner's specific requirements regarding the EDI format. This may include using certain segments, elements, or codes that need to be previewed to ensure they are implemented correctly.
Conducting test transactions:
Use the preview feature to conduct test transactions with your business partner before sending real data. This allows you to identify and resolve potential problems early, before they impact business operations.
Careful use of the preview feature helps you ensure that your EDI messages meet your business partner's requirements and ensure a smooth exchange of business data.
How XRechnung is Mapped in DocBits
1. Header Configuration (export_configuration.header)
The header section in the XRechnung is mapped to fields in DocBits as follows:
[export_configuration.header]
name = "header"

[export_configuration.header.fields]
DIVI = "RFP"
IBTP = "20"
IMCD = "0"
CRTP = "1"
CONO = "001"
SUNO = "[supplier_id]"
IVDT = "[invoice_date]"
SINO = "[invoice_id]"
SPYN = "[supplier_id]"
CUCD = "[currency]"
CUAM = "[total_amount]"
FTCO = "[supplier_country_code]"
PUNO = "[purchase_order]"
CORI = "[correlation_id]"
PAIN = "[sqr_field_esr_reference]"
TCHG = "[additional_amount]"
CDC1 = "[negative_amount]"
APCD = "[buyer_id]"
TEPY = "[payment_terms]"
PYME = "[payment_method]"
BKID = "[bank_id]"
GEOC = "1"
TECD = "[discount_term]"
TXAP = "[tax_applicable]"
TXIN = "[tax_included]"
• SUNO: Supplier ID, mapped to [supplier_id] from XRechnung.
• IVDT: Invoice Date, mapped to [invoice_date].
• SINO: Invoice Number, mapped to [invoice_id].
• Other fields such as total amount, currency, and payment terms are similarly mapped from the XRechnung to DocBits fields.
2. Tax Lines (export_configuration.tax_lines)
Tax-related information is mapped using the following configuration:
[export_configuration.tax_lines]
name = "tax_lines"

[export_configuration.tax_lines.fields]
RDTP = "3"
DIVI = "RFP"
CONO = "001"
TAXT = "2"
GEOC = "[[geo_code]]"
TTXA = "[[amount]]"
TAXC = "[[tax_code]]"
• GEOC: Geo Code, mapped to the corresponding [geo_code] from XRechnung.
• TAXC: Tax Code, mapped to [tax_code].
3. Order Header Charges (export_configuration.order_header_charges)
This section handles any additional charges that need to be added at the header level of the XRechnung.
[export_configuration.order_header_charges]
name = "order_header_charges"

[export_configuration.order_header_charges.fields]
RDTP = "2"
DIVI = "RFP"
CONO = "001"
NLAM = "[[amount]]"
CEID = "[[costing_element]]"
CDSE = "[[charge_sequence]]"
• NLAM: Amount for the order charge.
• CEID: Costing Element, which can be mapped from specific XRechnung elements.
4. Receipt Lines (export_configuration.receipt_lines)
Receipt lines, which represent line items in the XRechnung, are handled as follows:
[export_configuration.receipt_lines]
name = "receipt_lines"

[export_configuration.receipt_lines.fields]
RDTP = "1"
DIVI = "RFP"
RELP = "1"
CONO = "001"
IVQA = "[[quantity]]"
PUUN = "[[unit]]"
PUNO = "[[purchase_order]]"
PNLI = "[[line_number]]"
ITNO = "[[item_number]]"
POPN = "[[item_number]]"
SUDO = "[[packing_slip]]"
GRPR = "[[gross_unit_price]]"
PPUN = "[[unit_code_price]]"
TCHG = "[[charges]]"
CDC1 = "[[discount]]"
REPN = "[[receipt_number]]"
PNLS = "[[sub_line_number]]"
• IVQA: Quantity, mapped from the [quantity] in the XRechnung line items.
• ITNO: Item Number, mapped to [item_number].
5. Cost Lines (export_configuration.cost_lines)
Cost lines, which handle additional costs in the XRechnung, are mapped using the following:
[export_configuration.cost_lines]
name = "cost_lines"

[export_configuration.cost_lines.fields]
RDTP = "8"
DIVI = "RFP"
CONO = "001"
NLAM = "[[amount]]"
VTXT = "[[voucher_text]]"
AO01 = "[[accounting_object_1]]"
AO02 = "[[accounting_object_2]]"
AO03 = "[[accounting_object_3]]"
AO04 = "[[accounting_object_4]]"
AO05 = "[[accounting_object_5]]"
AO06 = "[[accounting_object_6]]"
AO07 = "[[accounting_object_7]]"
AIT1 = "[[ledger_account]]"
AIT2 = "[[dimension_2]]"
AIT3 = "[[dimension_3]]"
AIT4 = "[[dimension_4]]"
AIT5 = "[[dimension_5]]"
AIT6 = "[[dimension_6]]"
AIT7 = "[[dimension_7]]"
This section describes the implementation plan for importing and mapping data from XML files using the Peppol BIS Billing 3.0 schema. Peppol BIS Billing 3.0 was developed to standardize e-billing processes and ensure compliance with European standards.
Ensure full compliance with Peppol BIS Billing 3.0 specifications.
Seamless integration of e-invoice data into our accounts payable system using DocBits.
Improve data quality and processing efficiency.
The scope of this project is to map key elements of the Peppol BIS Billing 3.0 schema to our internal data structures. In particular, the mapping will cover the following areas:
Vendor and Buyer details
Invoice details
Invoice lines
Payment instructions
Tax and legal information
Vendor information:
cac:AccountingSupplierParty
cbc:EndpointID: Electronic address of the vendor
cbc:Name: Trade name of the vendor
cbc:CompanyID: Legal registration number of the vendor
cbc:StreetName, cbc:CityName, cbc:PostalZone: Address details of the vendor
Buyer information:
cac:AccountingCustomerParty
cbc:EndpointID: Electronic address of the buyer
cbc:Name: Trade name of the buyer
cbc:CompanyID: Legal registration number of the buyer
cbc:StreetName, cbc:CityName, cbc:PostalZone: Address details of the buyer
Invoice details:
cbc:ID: Invoice number
cbc:IssueDate: Issue date of the invoice
cbc:DueDate: Invoice due date
cbc:InvoiceTypeCode: Invoice type
Invoice lines:
cac:InvoiceLine
cbc:ID: Invoice line number
cbc:InvoicedQuantity: Invoiced quantity
cbc:LineExtensionAmount: Line extension amount
cbc:Description: Description of the billing position
cac:Item
cbc:Name: Item name
cbc:SellerItemIdentification/cbc:ID: Item number of the vendor
cac:Price
cbc:PriceAmount: Price per unit
cbc:BaseQuantity: Base quantity for the price
Payment instructions:
cac:PaymentMeans
cbc:PaymentMeansCode: Code to identify the payment method
cbc:PaymentID: Payment identifier
Tax information:
cac:TaxTotal
cbc:TaxAmount: Total tax amount
cac:TaxSubtotal: Details for each interim tax amount
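As a sketch of how these elements can be read programmatically (assuming the standard UBL 2 namespace URIs; this illustrates the mapping and is not DocBits' actual importer):

import xml.etree.ElementTree as ET

NS = {
    "cac": "urn:oasis:names:specification:ubl:schema:xsd:CommonAggregateComponents-2",
    "cbc": "urn:oasis:names:specification:ubl:schema:xsd:CommonBasicComponents-2",
}

tree = ET.parse("invoice.xml")  # Assumed: a Peppol BIS Billing 3.0 invoice file
root = tree.getroot()

invoice_id = root.findtext("cbc:ID", namespaces=NS)
issue_date = root.findtext("cbc:IssueDate", namespaces=NS)
# Path per the UBL structure; adjust to the actual schema if it differs.
supplier_name = root.findtext(
    "cac:AccountingSupplierParty/cac:Party/cac:PartyName/cbc:Name", namespaces=NS
)

for line in root.findall("cac:InvoiceLine", NS):
    qty = line.findtext("cbc:InvoicedQuantity", namespaces=NS)
    amount = line.findtext("cbc:LineExtensionAmount", namespaces=NS)
    print(invoice_id, issue_date, supplier_name, qty, amount)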
A PDF document is generated according to a standard layout with the imported fields in order to provide the user with a preview for reference purposes. Further customization of the PDF preview layout is possible but requires additional effort.
Provide detailed instructions on how to import sample documents for training, including the format and document types to use.
To import sample documents for training, follow these steps:
Prepare the sample documents: Make sure the sample documents are in a supported format, such as PDF, Word, Excel, etc. These documents should cover a variety of types and formats that may be encountered in production operations of the document processing system.
Navigate to the import function: Log in to the administration area of the document processing system and navigate to the area where you can import new documents.
Select the option to import documents: Click the button or link to import documents. There may be an option such as "Import".
Select amount & date format:
Amount Format:
The amount format may vary by region, but in general there are some common conventions:
Currency symbol: The currency symbol is usually placed before the amount, e.g. "$" for US dollars, "€" for euros, "£" for British pounds, etc.
Thousands separator: In some countries, long numbers use a thousands separator for better readability. In the US, a comma is common (e.g. 1,000), while in many European countries a period is used (e.g. 1.000).
Decimal separator: The decimal separator separates the integer part from the decimal places. Most English-speaking countries use a period (e.g. 10.99), while many European countries use a comma (e.g. 10,99).
The date format also varies by region, with different countries having different conventions. Here are the most common formats:
Day-Month-Year (DD-MM-YY or DD.MM.YY): In many European countries, the date is specified in day-month-year format. For example, "21.05.24" represents May 21, 2024.
Month-Day-Year (MM-DD-YY or MM/DD/YY): In the United States, the month-day-year format is often used. For example, "05/21/24" represents May 21, 2024.
Year-Month-Day (YY-MM-DD or YY/MM/DD): In some other countries, the year-month-day format is preferred. For example, "24/05/21" represents May 21, 2024.
It is important to note the specific format to avoid misunderstandings, especially in international communications or financial transactions.
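A small Python sketch of normalizing such region-dependent values before further processing (the separator conventions are as described above):

from datetime import datetime

def parse_amount(raw: str, decimal_sep: str = ",") -> float:
    """Convert a localized amount string like '1.234,56' into a float."""
    thousands_sep = "." if decimal_sep == "," else ","
    normalized = raw.replace(thousands_sep, "").replace(decimal_sep, ".")
    return float(normalized)

print(parse_amount("1.234,56"))                  # 1234.56 (European convention)
print(parse_amount("1,234.56", decimal_sep=".")) # 1234.56 (US convention)

# Dates: parse according to the configured regional format.
print(datetime.strptime("21.05.24", "%d.%m.%y").date())  # 2024-05-21
print(datetime.strptime("05/21/24", "%m/%d/%y").date())  # 2024-05-21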
Select the sample documents: Select the sample documents you want to import. This can be done by uploading the files from your local computer or by selecting documents from an already connected location.
Configure the document types and subtypes (if required): If your system supports different document types or subtypes, assign the appropriate type to each imported document. This will help the system to categorize and process the documents correctly.
Start the import process: Confirm the selection of documents and start the import process. Depending on the size and number of documents, this process may take some time.
Check the import status: Check the status of the import process to make sure that all documents were imported successfully. Make sure that no errors occurred and that the documents were processed correctly.
Train the model: After the documents are imported, use them to train the document processing system model. Perform training according to the system's instructions to make sure it can process the sample data effectively.
By regularly adding sample documents for training, you can ensure that your document processing system is always up to date and provides accurate and efficient processing.
To test the trained model and evaluate its accuracy and operational readiness, you can follow the steps below:
Preparing the test data:
Collect a representative sample of test data covering different types of documents and scenarios that the model will handle in the field. Ensure that the test data is of high quality and correctly labeled.
Running the classification tests:
Run the classification tests on the prepared test data.
Feed the test data into the model and let the model make predictions for classifying the documents.
If needed, add a new classification rule or edit an existing one.
Evaluating the model accuracy:
Compare the model's predictions with the actual classifications of the test data. Calculate metrics such as accuracy, precision, recall, and F1 score to evaluate the model's performance. These metrics provide insight into how well the model classified the documents and how reliable it is (a short example of computing them follows after these steps).
Analyze errors:
Examine the errors the model made when classifying the test data and analyze their causes. Identify patterns or trends in the errors and, if necessary, make adjustments to the model to improve its performance.
Optimize the model:
Based on the results of the classification tests and error analysis, you can optimize the model by adding training data, adjusting training parameters, or changing the model architecture. Repeat the testing process to check if the optimizations improved the model's performance.
Document the results:
Document the results of the classification tests and any adjustments or optimizations made to the model. This will help you track the model's progress over time and ensure that it is constantly improving.
By regularly running classification tests and evaluating the performance of your model, you can ensure that it is suitable for use in production and delivers accurate results.
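Here is the example referenced above: with scikit-learn, the metrics can be computed from lists of true and predicted document classes (a sketch, not part of DocBits itself):

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = ["invoice", "invoice", "order", "invoice", "order"]
y_pred = ["invoice", "order",   "order", "invoice", "order"]

print("Accuracy: ", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, average="macro"))
print("Recall:   ", recall_score(y_true, y_pred, average="macro"))
print("F1 score: ", f1_score(y_true, y_pred, average="macro"))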
In Docbits, Regex settings allow administrators to define custom patterns that the system uses to find and extract data from documents. This feature is especially useful in situations where data needs to be extracted from unstructured text or when the data follows a predictable format that can be captured using regex patterns.
Managing Regexes:
Add: Allows you to create a new regex pattern for a specific document type.
Save Changes: Saves modifications to existing regex configurations.
Pattern: Here, you can define the regex pattern that matches the specific data format required.
Origin: The document origin. For example, you can define a different regex for documents originating from Germany.
To edit existing regex patterns and ensure the changes work as expected without breaking existing functionality, you can follow the guide below:
Analyze the existing pattern:
Examine the existing regex pattern to understand what data it captures and how it works.
Identify the parts of the pattern that need to be changed and the impact of those changes on the data captured.
For example, suppose the invoice amount is to be read out:
(?<=Rechnungsbetrag:)[\s]*((((\d+)[,.]{1,10})+\d{0,2})|(\d+(?!,)))
Sample text: Rechnungsbetrag: 100.00
This pattern reads the amount including a thousands dot without passing the dot through; the allowed characters are the digits 0123456789, the period, and the comma.
As a second example, the value "P32180" is to be read out; the anchor phrase here is "Invoice date":
(?<=Invoice date )[\s]*P\d{5}
Sample text: Customer number Invoice number Invoice date P32180 613976 05/13/2019
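Patterns like these can be verified quickly with a short Python check against the sample text (a sketch using the patterns above):

import re

amount_pattern = r"(?<=Rechnungsbetrag:)[\s]*((((\d+)[,.]{1,10})+\d{0,2})|(\d+(?!,)))"
print(re.search(amount_pattern, "Rechnungsbetrag: 100.00").group(1))  # -> 100.00

value_pattern = r"(?<=Invoice date )[\s]*P\d{5}"
line = "Customer number Invoice number Invoice date P32180 613976 05/13/2019"
print(re.search(value_pattern, line).group(0))  # -> P32180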
Document the changes:
Take notes about the changes you plan to make to the regex pattern.
Note what new patterns you plan to add and what parts of the existing pattern may need to be changed or removed.
Prepare test data:
Collect test data that is representative of the different types of data the regex pattern typically captures.
Make sure your test data covers both typical and edge cases to verify the robustness of your changes.
Make changes to the regex pattern:
Make the planned changes to the regex pattern.
This may include adding new patterns, removing or adjusting existing parts, or optimizing the pattern for better performance.
Test the changes:
Apply the updated regex pattern to your test data and carefully review the results.
Verify that the pattern still correctly captures the desired data and that there are no unexpected impacts on other parts of the data or system.
Debugging and adapting:
If test results are not as expected or unexpected issues occur, carefully review your changes and make further adjustments as needed.
This may include reverting certain changes or adding additional adjustments to fix the problem.
Document the changes:
Update the documentation of your regex pattern to reflect the changes made.
Describe the updated patterns and the reasons for the changes made to help other developers understand and use the pattern.
Saving the changes:
Once you are sure that the changes are successful and work as expected, save the updated regex pattern to your code base or configuration files to ensure they are available for future use.
By following these steps and carefully testing changes to regex patterns, you can ensure that your regex pattern continues to work correctly while meeting new requirements.
Scripts in Docbits are typically written in a scripting language supported by the system (Python). They are triggered during the document processing workflow to apply complex business logic or to ensure data integrity and accuracy before the data is processed further or stored.
Script Management:
Name: Each script is given a unique name for identification.
Document Type: Links the script to a specific document type, which determines which documents the script is applied to.
Trigger On: Defines when the script is triggered (e.g., on document upload, before data export, after data validation).
Active/Inactive Status: Allows administrators to activate or deactivate scripts without deleting them, providing flexibility for testing and deployment.
Script Editor:
Provides an interface where scripts can be written and edited. The editor typically supports syntax highlighting, error highlighting, and other features that aid script development.
Sample Script: Scripts can include operations such as iterating over invoice lines to validate totals or to remove entries that do not meet certain criteria.
Logging into DocBits:
Open your web browser and log into DocBits with your credentials.
Navigate to Script Management:
Look for the option to manage scripts in the DocBits interface.
This may vary depending on your setup and configuration of DocBits.
Viewing existing scripts:
Once you are in the script management interface, you will see a list of all existing scripts.
Here you can scroll through the list to find the desired script you want to enable, disable or edit.
Enabling or disabling a script:
To enable or disable a script, find the relevant script in the list and toggle its active status.
Make sure to save changes after making your selections.
Editing a script:
If you need to edit an existing script, look for the button in the script management interface that allows editing the script.
Click it to open the editor where you can modify the script's code.
After making your changes, save the script again.
Review and test:
Before making changes to a script, carefully review the existing code and consider what impact your changes might have.
Test the script in a test environment to make sure it works as expected.
Documentation:
Don't forget to document your changes.
Write down what changes you made and why so that other users on the team can understand how the script works and what impact your changes might have.
Publishing changes:
When you are satisfied with your changes, republish the script to the DocBits production environment for the updated version to take effect.
These steps allow you to enable, disable and manage existing scripts in DocBits to adapt them to current processing needs and ensure that your documentation processes run efficiently and correctly.
Choose the scripting language:
First, you need to choose the scripting language you want to use. DocBits typically supports common scripting languages such as Python, JavaScript, or SQL. The choice of language depends on the needs of your project and your own competency.
Open the script development environment:
Log in to DocBits and navigate to the script development environment. This is in the administration area.
Create a new script:
Click the "+ New" button to open a new script editor.
Write the code:
Use the editor to write the code for your script. Start with the basic syntax of your chosen scripting language.
For example, if you are using Python, your script might look like this:
```python
def clean_patient_name(name):
    # Remove surrounding spaces and apply title-case capitalization
    cleaned_name = name.strip().title()
    return cleaned_name

if __name__ == "__main__":
    patient_name = " john doe "
    cleaned_name = clean_patient_name(patient_name)
    print("Cleaned patient name:", cleaned_name)
```
Test the script:
Check the code for errors and test it in a test environment. Make sure the script produces the expected results and works correctly.
Save the script:
Save the script in DocBits and give it a meaningful name that describes the purpose of the script.
Mapping the script to document types:
An important step is mapping the script to the appropriate document types. This determines when and how the script is applied. This can usually be done through a configuration interface in DocBits, where you can assign the script to a specific document type and specify under which conditions it should be applied.
Review and publish:
After you have created, tested and mapped the script, check it again for errors and inconsistencies. If everything is OK, you can publish the script to the DocBits production environment.
Through these steps, you can successfully create, test and implement a new script in DocBits to automate processes and improve the efficiency of medical documentation.
In DocBits, the EDI settings provide tools for defining and managing the structure and format of EDI messages that correspond to different document types, such as invoices or purchase orders. The settings make it possible to adapt EDI messages to the standards and requirements specific to different trading partners and industries.
EDI configuration elements:
Structure descriptor: Defines the basic structure of the EDI document, including segment order, mandatory fields, and qualifiers required for the EDI document to be valid.
Transformation: Specifies the transformations applied to convert the document data into an EDI-formatted message. This typically involves specifying mappings from document fields to EDI segments and elements.
Preview: Allows administrators to see what the EDI message will look like after transformation, which helps ensure accuracy before transmission.
Extraction paths: Shows the paths used to extract values from the document, which are then used to populate the EDI message.
XSLT editor:
Used for editing and validating the XSLT (eXtensible Stylesheet Language Transformations) used in the transformation process. XSLT is a powerful language designed for transforming XML documents into other XML documents or other formats such as HTML, text, or even other XML structures.
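To illustrate what such a transformation does, here is a minimal sketch that applies an XSLT stylesheet with the lxml library; the element names (Invoice, Total, Message, Amount) are made up for illustration and are not a DocBits or EDI schema:

```python
from lxml import etree  # third-party library: pip install lxml

# A tiny stylesheet that maps <Invoice><Total> to <Message><Amount>.
xslt_doc = etree.XML(b"""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/Invoice">
    <Message>
      <Amount><xsl:value-of select="Total"/></Amount>
    </Message>
  </xsl:template>
</xsl:stylesheet>
""")

source_doc = etree.XML(b"<Invoice><Total>119.00</Total></Invoice>")
transform = etree.XSLT(xslt_doc)
result = transform(source_doc)
print(str(result))  # serialized XML: <Message><Amount>119.00</Amount></Message>
```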
Using scripts to automate processes is critical for businesses of all sizes and in almost every industry. Not only do these scripts enable significant increases in efficiency, but they also ensure the accuracy and consistency of data, which in turn leads to informed decisions and improved operational efficiency.
Here are some key aspects of how scripts can be used to automate processes and ensure data accuracy:
Data cleansing:
Businesses often collect large amounts of data from various sources.
This data is often incomplete, inconsistent, or contains errors.
By using scripts, automated processes can be implemented to clean data, fill in missing values, remove duplicates, and correct errors.
This greatly improves the quality of the data and makes it easier to analyze and use.
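A minimal sketch of such a cleansing step, here using the pandas library (the column names and rules are illustrative assumptions):

```python
import pandas as pd  # third-party library: pip install pandas

df = pd.DataFrame({
    "customer": [" alice ", "BOB", "BOB", None],
    "amount": [100.0, 250.0, 250.0, 75.0],
})

df["customer"] = df["customer"].str.strip().str.title()  # normalize spelling
df["customer"] = df["customer"].fillna("Unknown")        # fill missing values
df = df.drop_duplicates()                                # remove exact duplicates
print(df)
```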
Applying business rules:
Businesses often have specific business rules that need to be applied to the data being processed.
Scripts can be used to implement these rules and ensure that all data is processed according to company standards.
This can include everything from validating input data to applying compliance regulations.
Integrating data with other systems:
Often, data from different sources needs to be integrated into different systems to ensure a seamless flow of information within the organization.
Scripts can be used to automate this integration by extracting data from a source, transforming it, and loading it into the target system.
For example, this could include integrating sales data into a CRM system or transferring customer feedback into an analytics tool.
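A simplified extract-transform-load sketch of that idea; the URLs and field names below are hypothetical placeholders, not real endpoints:

```python
import json
import urllib.request

def extract(source_url):
    """Fetch raw records (a JSON list) from the source system."""
    with urllib.request.urlopen(source_url) as response:
        return json.load(response)

def transform(records):
    """Map source fields onto the target system's schema."""
    return [
        {"customer_id": record["id"], "feedback_text": record["comment"].strip()}
        for record in records
    ]

def load(records, target_url):
    """Post the transformed records to the target system."""
    payload = json.dumps(records).encode("utf-8")
    request = urllib.request.Request(
        target_url, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(request)

# Example wiring (placeholder URLs):
# load(transform(extract("https://source.example/feedback")),
#      "https://crm.example/api/feedback")
```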
Automating repetitive tasks:
Many tasks in a business are routine and repetitive.
By using scripts, these tasks can be automated, saving time and resources. Examples include automatically generating reports, updating databases, or performing regular maintenance.
Overall, scripts play a crucial role in automating processes and ensuring data accuracy. By automating repeatable tasks and applying business rules consistently, they help increase efficiency, reduce errors, and enable informed decisions based on reliable data.
Define the structure descriptor:
Identify the type of EDI message you are working with, e.g. ANSI X12, EDIFACT, or a custom format.
Determine the segments, elements, and subelements within the EDI structure.
Create a structure descriptor that accurately reflects the hierarchy and organization of the EDI message. This can be done using a special syntax such as XML or JSON.
Set up transformations:
Use an appropriate tool or software that supports EDI transformations, such as an EDI translator.
Define the rules for converting the EDI message to your system's internal format and vice versa.
Configure the transformations to interpret and process segments, elements, and subelements according to your system's requirements. Test the transformations thoroughly to ensure that the data is correctly interpreted and formatted.
Configure extraction paths for optimal data extraction and formatting:
Identify the data fields to be extracted and transferred to your internal system.
Define extraction paths or rules to extract the relevant data fields from the EDI messages.
Consider the different variations and formats that may occur in the incoming EDI messages and ensure that the extraction paths are flexible enough to accommodate them.
Validate the extraction results to ensure that the correct data fields are extracted and correctly formatted.
By carefully defining the structure descriptor, setting up transformations and configuring extraction paths, you can ensure that data extraction and formatting are performed optimally in your EDI templates. This will help improve the efficiency and accuracy of your electronic business communications.
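Purely as an illustration of these three concepts, the sketch below expresses a simplified structure descriptor and extraction paths as Python data; DocBits' actual descriptor syntax is defined in its EDI settings and may differ:

```python
# Simplified, illustrative structure descriptor for an invoice-like message.
structure_descriptor = {
    "message": "INVOIC",
    "segments": [
        {"id": "BGM", "mandatory": True,  "elements": ["document_number"]},
        {"id": "DTM", "mandatory": True,  "elements": ["invoice_date"]},
        {"id": "MOA", "mandatory": False, "elements": ["total_amount"]},
    ],
}

# Illustrative extraction paths: DocBits field ID -> location in the message.
extraction_paths = {
    "invoice_id": "BGM/document_number",
    "invoice_date": "DTM/invoice_date",
    "total_amount": "MOA/total_amount",
}

def missing_mandatory_segments(message_segments, descriptor):
    """Return the IDs of mandatory segments that are absent from the message."""
    present = {segment["id"] for segment in message_segments}
    return [
        seg["id"] for seg in descriptor["segments"]
        if seg["mandatory"] and seg["id"] not in present
    ]

print(missing_mandatory_segments([{"id": "BGM"}], structure_descriptor))  # ['DTM']
```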
Currently, eSLOG invoice versions 1.6 and 2.0 are supported.
Refer to the official eSLOG documentation for further details.
Both eSLOG versions are enabled by default.
Configure eSLOG:
Navigate to Settings → Global Settings → Document Types → Invoice.
Click on E-Doc.
A list of all available e-docs appears.
Find the eSLOG version you want to modify.
In the transformation settings, you can define the path used to locate specific information within the XML file and store it in a new structure, which makes the data easier to access. Note: If you use this functionality, you must use the newly created XML paths, not the original XML paths, in the Preview and Extraction Paths.
Open the Transformation.
Create a new draft by clicking the pencil icon.
Select the newly created draft.
Create a new field or modify an existing one.
Set the desired path for data extraction.
Click Save.
The Preview PDF configuration is used to generate a human-readable version of the document. You can customize it with HTML to suit your needs.
Open the Preview.
Create a new draft by clicking the pencil icon.
Select the newly created draft.
Create a new field or modify an existing one.
Set the desired path for data extraction.
Click Save.
The Extraction Paths configuration is used to extract data and populate fields in the validation screen, such as the invoice table or fields configured in the invoice layout.
Open the Extraction Paths.
Create a new draft by clicking the pencil icon.
Select the newly created draft.
Create a new field or modify an existing one.
The left side represents the DocBits field ID, which can be found under Settings → Global Settings → Document Types → Invoice → Fields.
The right side represents the path to the field created in the Transformation.
Click Save.
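For instance, the pairs you configure might conceptually look like the following (the right-hand paths are made-up examples, not real eSLOG paths):

```python
# Left side: DocBits field ID. Right side: path created in the Transformation.
extraction_paths = {
    "invoice_id": "Invoice/Header/InvoiceNumber",
    "invoice_date": "Invoice/Header/IssueDate",
    "total_amount": "Invoice/Summary/GrandTotal",
}
```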
The More Settings area allows administrators to configure various aspects of document processing that are not covered by the basic settings. This includes options for table extraction, document review, PDF generation, approval processes, and settings specific to certain operations such as purchase orders or accounting.
Table extraction:
Skip table validation: Allows the validation process for table data to be skipped, which can be useful in scenarios where data validation needs to be flexible.
In review:
Design in review form: Configures the layout and fields that appear in the review forms used during the document review process.
PDF generation:
Design template: Specifies the template used to generate PDF versions of the documents, which can be crucial for archiving or external communication.
Approval:
Approve before export: Ensures that documents must be approved before they can be exported from the system.
Second approval: Adds an extra layer of approval for further validation, improving control over document processing.
Purchase order / auto accounting:
PO table in layout builder: Allows purchase order tables to be included in the layout builder for custom document layouts.
Purchase order: Enables or disables the processing of purchase order documents within the system.
PO tolerance setting: Sets tolerance levels for purchase order quantities, which helps accommodate small deviations without flagging them as errors.
Document alternative export:
Disable PO statuses: Allows certain purchase order statuses to be disabled during the export process, providing flexibility in how orders are handled.
Supplier item number map:
A utility setting that maps supplier item numbers to internal item numbers, ensuring accuracy in inventory and purchase order management.
In DocBits, XRechnung invoices are mapped to specific fields using a predefined configuration that ensures the data can be seamlessly exported to various formats, including integration with other systems like Infor. The export configuration leverages templates and rules to ensure that each element of the XRechnung is captured and mapped appropriately.
1. Document Types: XRechnung documents are mapped to specific Document Types in DocBits. Each document type (e.g., invoice, credit note, debit note) has its own structure and fields.
2. Field Mapping: Fields in the XRechnung are mapped to corresponding fields in DocBits using an export configuration file. This file defines how each XRechnung field is handled and where it should be exported.
3. Rules for Export: Certain rules are defined to handle specific cases where values may differ, including tolerance checks, approval requirements, or line-level charges. These rules ensure that XRechnung data is processed and exported correctly, based on specific business logic.
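As a rough sketch of such a rule, a tolerance check might look like this in Python; the field names and threshold are assumptions for illustration, not DocBits' actual export configuration:

```python
def within_total_tolerance(invoice, tolerance=0.05):
    """True if the header total matches the sum of line totals within tolerance."""
    line_total = sum(line["total_amount"] for line in invoice["lines"])
    return abs(invoice["total_amount"] - line_total) <= tolerance

invoice = {
    "total_amount": 119.00,
    "lines": [{"total_amount": 60.00}, {"total_amount": 58.00}],
}
print(within_total_tolerance(invoice))  # False -> e.g., route to manual approval
```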
| DocBits Field | EDI Segment/Element | Description |
| --- | --- | --- |
| supplier_id | `N104` | Unique identifier for the supplier. |
| supplier_name | `N102` | Name of the supplier. |
| supplier_address | `N101` | Address of the supplier. |
| supplier_tax_id | not mapped yet | Tax identification number of the supplier. |
| delivery_date | `DTM02` | Date the goods or services were delivered. |
| supplier_iban | not mapped yet | IBAN number of the supplier. |
| payment_terms | `ITD12` | Terms of payment specified for the invoice. |
| purchase_order | not mapped yet | Purchase order number associated with the invoice. |
| currency | `CUR02` | Currency used in the invoice. |
| net_amount | not mapped yet | Net amount before taxes. |
| tax_amount | not mapped yet | Total tax amount applied. |
| tax_rate | not mapped yet | Tax rate applied to the net amount. |
| net_amount_2 | not mapped yet | Secondary net amount (if applicable). |
| tax_amount_2 | not mapped yet | Secondary tax amount (if applicable). |
| tax_rate_2 | not mapped yet | Secondary tax rate (if applicable). |
| total_net_amount | not mapped yet | Total net amount of the invoice. |
| total_tax_amount | not mapped yet | Total tax amount of the invoice. |
| total_amount | not mapped yet | Total amount of the invoice, including taxes. |
| POSITION | `PO101` | Position within the invoice (related to line items). |
| PURCHASE_ORDER | not mapped yet | Purchase order number. |
| ITEM_NUMBER | `PO1` | Item number associated with the invoice line item. |
| SUPPLIER_ITEM_NUMBER | `REF02` | Supplier's item number. |
| DESCRIPTION | `PID05` | Description of the item or service. |
| QUANTITY | `PO102` | Quantity of items or services. |
| UNIT | `PO103` | Unit of measure for the items or services. |
| UNIT_PRICE | `PO104` | Price per unit of the item or service. |
| VAT | not mapped yet | VAT amount for the item or service. |
| TOTAL_AMOUNT | `PO102 * PO104` | Total amount for the line item, including VAT. |
| AGREEMENT_NUMBER | `REF02` | Agreement number related to the invoice (if applicable). |
| TAX | `(PO105)/100` | General tax amount applied to the invoice. |
| order_date | `BEG05` | Date when the order was placed. |
| negative_amount | not mapped yet | Amount that is negative, possibly due to returns or adjustments. |
| charges | not mapped yet | Additional charges applied to the invoice. |
| order_number | `BEG03` | Number assigned to the order. |
| created_by | `BEG02` | Identifier or name of the person who created the invoice. |
| delivery_terms | `BEG07` | Terms related to the delivery of goods or services. |
| delivery_method | `BEG05` | Method of delivery used for the goods or services. |
| allowance | `sum(SAC05)/100` | Allowance amount provided, if any. |
| tax | `sum(SAC05)/100` | Tax amount applied to the invoice (similar to TAX above). |
| delivery_name | not mapped yet | Name of the recipient or entity receiving the delivery. |
| delivery_address_line_1 | not mapped yet | First line of the delivery address. |
| delivery_address_line_2 | not mapped yet | Second line of the delivery address (if applicable). |
| pickup_address | not mapped yet | Address where the goods can be picked up (if applicable). |
| DocBits Field | EDI Segment/Element | Description |
| --- | --- | --- |
| supplier_id | not mapped yet | Unique identifier for the supplier. |
| supplier_name | `N102` | Name of the supplier. |
| supplier_address | `N301` | Address of the supplier. |
| supplier_tax_id | not mapped yet | Tax identification number of the supplier. |
| purchase_order | `PRF01` | Purchase order number associated with the invoice. |
| bill_of_landing | `REF02` | Bill of lading document number. |
| trailer_number | `TD303` | Number of the trailer transporting the goods. |
| asn_date | `BSN03` | Date of the Advance Shipment Notice (ASN). |
| vendor_delivery_number | `BSN02` | Delivery number assigned by the vendor. |
| carrier_name | `TD505` | Name of the carrier responsible for the shipment. |
| POSITION | not mapped yet | Position within the invoice (related to line items). |
| PURCHASE_ORDER | `PRF01` | Purchase order number. |
| ITEM_NUMBER | `LIN03` | Item number associated with the invoice line item. |
| SUPPLIER_ITEM_NUMBER | `LIN05` | Supplier's item number. |
| DESCRIPTION | not mapped yet | Description of the item or service. |
| QUANTITY | `SN102` | Quantity of items or services. |
| UNIT | `SN103` | Unit of measure for the items or services. |
| UNIT_PRICE | not mapped yet | Price per unit of the item or service. |
| VAT | not mapped yet | VAT amount for the item or service. |
| TOTAL_AMOUNT | not mapped yet | Total amount for the line item, including VAT. |
| LOT_NUMBER | `LIN07` | Lot number associated with the item. |
| SSCC | `MAN02` | Serial Shipping Container Code for the item. |
| PALLATE | `REF02` | Pallet information for the shipment. |
| MANUFACTURING | `DTM02` | Manufacturing date of the item. |
| TEMP | `LIN09` | Temperature conditions (if applicable). |
| NET_WEIGHT | `PO406` | Net weight of the item. |
| PACKAGE_NUMBER | `MAN05` | Package number associated with the item. |
| DocBits Field | EDI Segment/Element | Description |
| --- | --- | --- |
| supplier_id | not mapped yet | Unique identifier for the supplier. |
| supplier_name | `N102` | Name of the supplier. |
| supplier_address | `N301` | Address of the supplier. |
| supplier_tax_id | not mapped yet | Tax identification number of the supplier. |
| invoice_id | `BIG02` | Unique identifier for the invoice. |
| invoice_date | `BIG03` | Date when the invoice was issued. |
| delivery_date | not mapped yet | Date when the goods or services were delivered. |
| supplier_iban | not mapped yet | International Bank Account Number of the supplier. |
| payment_terms | not mapped yet | Terms for payment specified in the invoice. |
| purchase_order | not mapped yet | Purchase order number associated with the invoice. |
| currency | `CUR02` | Currency in which the invoice is issued. |
| net_amount | not mapped yet | Total amount before taxes. |
| tax_amount | `TXI02` | Amount of tax applied. |
| tax_rate | not mapped yet | Rate at which tax is applied. |
| net_amount_2 | not mapped yet | Additional net amount for another tax rate, if applicable. |
| tax_amount_2 | not mapped yet | Additional tax amount for another tax rate, if applicable. |
| tax_rate_2 | not mapped yet | Additional tax rate, if applicable. |
| total_net_amount | not mapped yet | Total net amount of the invoice. |
| total_tax_amount | not mapped yet | Total tax amount of the invoice. |
| total_amount | `TDS01` | Total amount including taxes. |
| POSITION | `REF02` | Position of the line item in the invoice. |
| PURCHASE_ORDER | `REF02` | Purchase order number referenced in the invoice. |
| ITEM_NUMBER | `REF02` | Number identifying the line item. |
| SUPPLIER_ITEM_NUMBER | not mapped yet | Item number assigned by the supplier. |
| DESCRIPTION | not mapped yet | Description of the line item. |
| QUANTITY | `IT102` | Quantity of items. |
| UNIT | `IT103` | Unit of measure for the item. |
| UNIT_PRICE | `IT104` | Price per unit of the item. |
| VAT | not mapped yet | Value-added tax applied to the item. |
| TOTAL_AMOUNT | `IT102 * IT104` | Total amount for the line item including taxes. |
| order_date | not mapped yet | Date when the order was placed. |
| invoice_sub_type | not mapped yet | Sub-type of the invoice, if applicable. |
| invoice_type | not mapped yet | Type of the invoice (e.g., standard, credit, debit). |
| due_date | not mapped yet | Date by which payment is due. |
| negative_amount | `SAC02` | Amount representing a credit or reduction. |
| additional_amount | not mapped yet | Additional amount not covered by other fields. |
| total_net_amount_us | not mapped yet | Total net amount in USD. |
| purchase_order_supplier_id | not mapped yet | Supplier's ID related to the purchase order. |
| purchase_order_supplier_name | not mapped yet | Name of the supplier related to the purchase order. |
| purchase_order_warehouse_id | not mapped yet | Warehouse ID associated with the purchase order. |
| purchase_order_location_id | not mapped yet | Location ID related to the purchase order. |
| ship_to_party_id | not mapped yet | Identifier for the party to whom goods are shipped. |
| ship_to_party_name | not mapped yet | Name of the party to whom goods are shipped. |
| buyer_id | not mapped yet | Identifier for the buyer. |
| buyer_name | not mapped yet | Name of the buyer. |
| tax_code | not mapped yet | Code representing the tax applied. |
| tax_code_2 | not mapped yet | Additional tax code, if applicable. |
| net_amount_3 | not mapped yet | Another net amount, if applicable. |
| tax_amount_3 | not mapped yet | Additional tax amount, if applicable. |
| tax_rate_3 | not mapped yet | Additional tax rate, if applicable. |
| tax_code_3 | not mapped yet | Additional tax code, if applicable. |
| additional_amount_2 | not mapped yet | Additional amount not covered by other fields. |
| additional_amount_3 | not mapped yet | Another additional amount, if applicable. |
| negative_amount_2 | not mapped yet | Additional negative amount, if applicable. |
| negative_amount_3 | not mapped yet | Another negative amount, if applicable. |
| shipping_charges | not mapped yet | Charges for shipping included in the invoice. |
| sales_tax | not mapped yet | Tax applied on sales. |
| sub_tax | not mapped yet | Sub-tax applied, if applicable. |
| wi_tax | not mapped yet | Withholding tax applied, if applicable. |
| county_tax | not mapped yet | Tax applied at the county level. |
| city_tax | not mapped yet | Tax applied at the city level. |
| custom_field_1 | not mapped yet | Custom field for additional data. |
| custom_field_2 | not mapped yet | Additional custom field. |
| custom_field_3 | not mapped yet | Additional custom field. |
| custom_field_4 | not mapped yet | Additional custom field. |
| custom_field_5 | not mapped yet | Additional custom field. |
| custom_field_6 | not mapped yet | Additional custom field. |
| custom_field_7 | not mapped yet | Additional custom field. |
| custom_field_8 | not mapped yet | Additional custom field. |
| custom_field_9 | not mapped yet | Additional custom field. |
| custom_field_10 | not mapped yet | Additional custom field. |
| firma | not mapped yet | Company or firm name. |
| name | not mapped yet | General name field. |
| strasse | not mapped yet | Street address of the supplier. |
| postleitzahl | not mapped yet | Postal code of the supplier's address. |
| id_nummer | not mapped yet | Identification number for the entity. |
| DocBits Field | EDI Segment/Element | Description |
| --- | --- | --- |
| supplier_id | `N104` | Unique identifier for the supplier. |
| supplier_name | `N102` | Name of the supplier. |
| supplier_address | `N301` | Address of the supplier. |
| supplier_tax_id | not mapped yet | Tax identification number of the supplier. |
| invoice_id | not mapped yet | Unique identifier for the invoice. |
| invoice_date | not mapped yet | Date the invoice was issued. |
| delivery_date | not mapped yet | Date the goods or services were delivered. |
| supplier_iban | not mapped yet | IBAN number of the supplier. |
| payment_terms | not mapped yet | Terms of payment specified for the invoice. |
| purchase_order | `BAK03` | Purchase order number associated with the invoice. |
| currency | `CUR02` | Currency used in the invoice. |
| net_amount | not mapped yet | Net amount before taxes. |
| tax_amount | not mapped yet | Total tax amount applied. |
| tax_rate | not mapped yet | Tax rate applied to the net amount. |
| net_amount_2 | not mapped yet | Secondary net amount (if applicable). |
| tax_amount_2 | not mapped yet | Secondary tax amount (if applicable). |
| tax_rate_2 | not mapped yet | Secondary tax rate (if applicable). |
| total_net_amount | not mapped yet | Total net amount of the invoice. |
| total_tax_amount | not mapped yet | Total tax amount of the invoice. |
| total_amount | not mapped yet | Total amount of the invoice, including taxes. |
| order_date | not mapped yet | Date when the order was placed. |
| document_date | `BAK04` | Date of the document creation or issue. |
| POSITION | `PO101` | Position within the invoice (related to line items). |
| PURCHASE_ORDER | not mapped yet | Purchase order number. |
| ITEM_NUMBER | `PO107` | Item number associated with the invoice line item. |
| SUPPLIER_ITEM_NUMBER | not mapped yet | Supplier's item number. |
| DESCRIPTION | `PO105` | Description of the item or service. |
| QUANTITY | `ACK02, PO102` | Quantity of items or services. |
| UNIT | `PO103` | Unit of measure for the items or services. |
| UNIT_PRICE | `ACK02, PO104` | Price per unit of the item or service. |
| VAT | not mapped yet | VAT amount for the item or service. |
| TOTAL_AMOUNT | `(ACK02 * ACK02), (PO102 * PO104)` | Total amount for the line item, including VAT. |
| PROMISED_DELIVERY_DATE | `DTM02` | Promised delivery date for the goods or services. |
| net_amount_3 | not mapped yet | Tertiary net amount (if applicable). |
| tax_amount_3 | not mapped yet | Tertiary tax amount (if applicable). |
| tax_rate_3 | not mapped yet | Tertiary tax rate (if applicable). |
| custom_field_1 | not mapped yet | Custom field for additional information (1). |
| custom_field_2 | not mapped yet | Custom field for additional information (2). |
| custom_field_3 | not mapped yet | Custom field for additional information (3). |
| custom_field_4 | not mapped yet | Custom field for additional information (4). |
| custom_field_5 | not mapped yet | Custom field for additional information (5). |
| custom_field_6 | not mapped yet | Custom field for additional information (6). |
| custom_field_7 | not mapped yet | Custom field for additional information (7). |
| custom_field_8 | not mapped yet | Custom field for additional information (8). |
| custom_field_9 | not mapped yet | Custom field for additional information (9). |
| custom_field_10 | not mapped yet | Custom field for additional information (10). |
| DocBits Field | XML Path | Description |
| --- | --- | --- |
| supplier_id | `<ram:SellerTradeParty><ram:ID>` | Supplier's identification number. |
| supplier_name | `<ram:SellerTradeParty><ram:Name>` | Supplier's name. |
| supplier_address | `<ram:SellerTradeParty><ram:PostalTradeAddress><ram:LineOne>` | Supplier's address line one. |
| supplier_tax_id | `<ram:SellerTradeParty><ram:SpecifiedTaxRegistration><ram:ID>` | Supplier's tax identification number. |
| company_id | `<ram:InvoiceeTradeParty><ram:ID>` | Company's identification number. |
| company_name | `<ram:InvoiceeTradeParty><ram:Name>` | Company's name. |
| company_street | `<ram:InvoiceeTradeParty><ram:PostalTradeAddress><ram:LineOne>` | Company's address line one. |
| company_plz | `<ram:InvoiceeTradeParty><ram:PostalTradeAddress><ram:PostcodeCode>` | Company's postal code. |
| company_vat | `<ram:InvoiceeTradeParty><ram:SpecifiedTaxRegistration><ram:ID>` | Company's VAT number. |
| invoice_id | `<rsm:ExchangedDocument><ram:ID>` | Invoice identification number. |
| invoice_date | `<ram:IssueDateTime><ram:DateTimeString>` | Date when the invoice was issued. |
| delivery_date | `<ram:ApplicableHeaderTradeDelivery><ram:ActualDeliverySupplyChainEvent><ram:OccurrenceDateTime><ram:DateTimeString>` | Date of actual delivery. |
| supplier_iban | `<ram:PayeePartyCreditorFinancialAccount><ram:IBANID>` | Supplier's IBAN number. |
| payment_terms | `<ram:SpecifiedTradePaymentTerms><ram:Description>` | Payment terms description. |
| purchase_order | `<ram:BuyerOrderReferencedDocument><ram:IssuerAssignedID>` | Reference to the purchase order. |
| currency | `<ram:InvoiceCurrencyCode>` | Currency used in the invoice. |
| net_amount | `<ram:ApplicableTradeTax><ram:BasisAmount>` | Net amount before tax. |
| tax_amount | `<ram:ApplicableTradeTax><ram:CalculatedAmount>` | Amount of tax. |
| tax_rate | `<ram:ApplicableTradeTax><ram:RateApplicablePercent>` | VAT rate applied. |
| net_amount_2 | `<ram:ApplicableTradeTax><ram:BasisAmount>` | Net amount before tax. |
| tax_amount_2 | `<ram:ApplicableTradeTax><ram:CalculatedAmount>` | Amount of tax. |
| tax_rate_2 | `<ram:ApplicableTradeTax><ram:RateApplicablePercent>` | VAT rate applied. |
| total_net_amount | `<ram:SpecifiedTradeSettlementHeaderMonetarySummation><ram:TaxBasisTotalAmount>` | Total net amount before tax. |
| total_tax_amount | `<ram:SpecifiedTradeSettlementHeaderMonetarySummation><ram:TaxTotalAmount>` | Total tax amount. |
| total_amount | `<ram:SpecifiedTradeSettlementHeaderMonetarySummation><ram:GrandTotalAmount>` | Total invoice amount. |
| POSITION | `<ram:AssociatedDocumentLineDocument><ram:LineID>` | Line position number in the invoice. |
| PURCHASE_ORDER | `<ram:BuyerOrderReferencedDocument><ram:IssuerAssignedID>` | Purchase order reference. |
| ITEM_NUMBER | `<ram:SpecifiedTradeProduct><ram:SellerAssignedID>` | Item number assigned by the seller. |
| SUPPLIER_ITEM_NUMBER | `<ram:SpecifiedTradeProduct><ram:GlobalID>` | Global item number assigned by the supplier. |
| DESCRIPTION | `<ram:SpecifiedTradeProduct><ram:Name>` | Description of the item. |
| QUANTITY | `<ram:SpecifiedLineTradeDelivery><ram:BilledQuantity>` | Quantity of items billed. |
| UNIT | `<ram:BilledQuantity>unitCode` | Unit of measure for the quantity. |
| UNIT_PRICE | `<ram:SpecifiedLineTradeAgreement><ram:NetPriceProductTradePrice><ram:ChargeAmount>` | Unit price of the item. |
| VAT | `<ram:SpecifiedLineTradeSettlement><ram:ApplicableTradeTax><ram:RateApplicablePercent>` | VAT rate applied to the line item. |
| TOTAL_AMOUNT | `<ram:SpecifiedLineTradeSettlement><ram:SpecifiedTradeSettlementLineMonetarySummation><ram:LineTotalAmount>` | Total amount for the line item. |
| order_date | not mapped yet | Date of the order. |
| invoice_sub_type | not mapped yet | Sub-type of the invoice. |
| invoice_type | not mapped yet | Type of the invoice. |
| due_date | not mapped yet | Due date for payment. |
| negative_amount | not mapped yet | Amount with a negative value. |
| charges | not mapped yet | Additional charges. |
| accounting_date | not mapped yet | Date for accounting purposes. |
| supplier_country_code | not mapped yet | Country code of the supplier. |
| tax_country_1 | not mapped yet | Country code for tax purposes. |
| correlation_id | not mapped yet | Identifier for correlation. |
| sqr_field_esr_reference | not mapped yet | Reference for SQR field ESR. |
| additional_amount | not mapped yet | Additional amount in the invoice. |
| authorised_user | not mapped yet | User authorized for the transaction. |
| payment_method | not mapped yet | Method of payment used. |
| bank_id | not mapped yet | Identification of the bank. |
| geo_code | not mapped yet | Geographical code. |
| discount_term | not mapped yet | Terms for any discount applied. |
| total_net_amount_us | not mapped yet | Total net amount in USD. |
| purchase_order_supplier_id | not mapped yet | Supplier's ID in the purchase order. |
| purchase_order_supplier_name | not mapped yet | Supplier's name in the purchase order. |
| purchase_order_warehouse_id | not mapped yet | Warehouse ID for the purchase order. |
| purchase_order_location_id | not mapped yet | Location ID for the purchase order. |
| ship_to_party_id | not mapped yet | ID of the party receiving the shipment. |
| ship_to_party_name | not mapped yet | Name of the party receiving the shipment. |
| buyer_id | `<ram:BuyerTradeParty><ram:ID>` | Buyer's identification number. |
| buyer_name | `<ram:BuyerTradeParty><ram:Name>` | Buyer's name. |
| tax_code | not mapped yet | Code for tax purposes. |
| tax_code_2 | not mapped yet | Secondary tax code. |
| net_amount_3 | not mapped yet | Net amount with a third tax rate. |
| tax_amount_3 | not mapped yet | Tax amount with a third tax rate. |
| tax_rate_3 | not mapped yet | Third tax rate applied. |
| tax_code_3 | not mapped yet | Tertiary tax code. |
| additional_amount_2 | not mapped yet | Additional amount 2. |
| additional_amount_3 | not mapped yet | Additional amount 3. |
| negative_amount_2 | not mapped yet | Second negative amount. |
| negative_amount_3 | not mapped yet | Third negative amount. |
| shipping_charges | not mapped yet | Charges for shipping. |
| sales_tax | not mapped yet | Sales tax amount. |
| sub_tax | not mapped yet | Sub-tax amount. |
| wi_tax | not mapped yet | Withholding tax amount. |
| county_tax | not mapped yet | County tax amount. |
| city_tax | not mapped yet | City tax amount. |
| custom_field_1 | not mapped yet | Custom field 1. |
| custom_field_2 | not mapped yet | Custom field 2. |
| custom_field_3 | not mapped yet | Custom field 3. |
| custom_field_4 | not mapped yet | Custom field 4. |
| custom_field_5 | not mapped yet | Custom field 5. |
| custom_field_6 | not mapped yet | Custom field 6. |
| custom_field_7 | not mapped yet | Custom field 7. |
| custom_field_8 | not mapped yet | Custom field 8. |
| custom_field_9 | not mapped yet | Custom field 9. |
| custom_field_10 | not mapped yet | Custom field 10. |
| firma | not mapped yet | Company name. |
| name | not mapped yet | Name of the company or individual. |
| strasse | not mapped yet | Street address. |
| postleitzahl | not mapped yet | Postal code. |
| id_nummer | not mapped yet | Identification number. |
| DocBits Field | Mapping | Description |
| --- | --- | --- |
| supplier_id | not mapped yet | Supplier's identification number. |
| supplier_name | not mapped yet | Supplier's name. |
| supplier_address | not mapped yet | Supplier's address line one. |
| supplier_tax_id | not mapped yet | Supplier's tax identification number. |
| company_id | not mapped yet | Company's identification number. |
| company_name | not mapped yet | Company's name. |
| company_street | not mapped yet | Company's address line one. |
| company_plz | not mapped yet | Company's postal code. |
| company_vat | not mapped yet | Company's VAT number. |
| invoice_id | not mapped yet | Invoice identification number. |
| invoice_date | not mapped yet | Date when the invoice was issued. |
| delivery_date | not mapped yet | Date of actual delivery. |
| supplier_iban | not mapped yet | Supplier's IBAN number. |
| payment_terms | not mapped yet | Payment terms description. |
| purchase_order | not mapped yet | Reference to the purchase order. |
| currency | not mapped yet | Currency used in the invoice. |
| net_amount | not mapped yet | Net amount before tax. |
| tax_amount | not mapped yet | Amount of tax. |
| tax_rate | not mapped yet | VAT rate applied. |
| net_amount_2 | not mapped yet | Net amount before tax. |
| tax_amount_2 | not mapped yet | Amount of tax. |
| tax_rate_2 | not mapped yet | VAT rate applied. |
| total_net_amount | not mapped yet | Total net amount before tax. |
| total_tax_amount | not mapped yet | Total tax amount. |
| total_amount | not mapped yet | Total invoice amount. |
| POSITION | not mapped yet | Line position number in the invoice. |
| PURCHASE_ORDER | not mapped yet | Purchase order reference. |
| ITEM_NUMBER | not mapped yet | Item number assigned by the seller. |
| SUPPLIER_ITEM_NUMBER | not mapped yet | Global item number assigned by the supplier. |
| DESCRIPTION | not mapped yet | Description of the item. |
| QUANTITY | not mapped yet | Quantity of items billed. |
| UNIT | not mapped yet | Unit of measure for the quantity. |
| UNIT_PRICE | not mapped yet | Unit price of the item. |
| VAT | not mapped yet | VAT rate applied to the line item. |
| TOTAL_AMOUNT | not mapped yet | Total amount for the line item. |
| order_date | not mapped yet | Date of the order. |
| invoice_sub_type | not mapped yet | Sub-type of the invoice. |
| invoice_type | not mapped yet | Type of the invoice. |
| due_date | not mapped yet | Due date for payment. |
| negative_amount | not mapped yet | Amount with a negative value. |
| charges | not mapped yet | Additional charges. |
| accounting_date | not mapped yet | Date for accounting purposes. |
| supplier_country_code | not mapped yet | Country code of the supplier. |
| tax_country_1 | not mapped yet | Country code for tax purposes. |
| correlation_id | not mapped yet | Identifier for correlation. |
| sqr_field_esr_reference | not mapped yet | Reference for SQR field ESR. |
| additional_amount | not mapped yet | Additional amount in the invoice. |
| authorised_user | not mapped yet | User authorized for the transaction. |
| payment_method | not mapped yet | Method of payment used. |
| bank_id | not mapped yet | Identification of the bank. |
| geo_code | not mapped yet | Geographical code. |
| discount_term | not mapped yet | Terms for any discount applied. |
| total_net_amount_us | not mapped yet | Total net amount in USD. |
| purchase_order_supplier_id | not mapped yet | Supplier's ID in the purchase order. |
| purchase_order_supplier_name | not mapped yet | Supplier's name in the purchase order. |
| purchase_order_warehouse_id | not mapped yet | Warehouse ID for the purchase order. |
| purchase_order_location_id | not mapped yet | Location ID for the purchase order. |
| ship_to_party_id | not mapped yet | ID of the party receiving the shipment. |
| ship_to_party_name | not mapped yet | Name of the party receiving the shipment. |
| buyer_id | not mapped yet | Buyer's identification number. |
| buyer_name | not mapped yet | Buyer's name. |
| tax_code | not mapped yet | Code for tax purposes. |
| tax_code_2 | not mapped yet | Secondary tax code. |
| net_amount_3 | not mapped yet | Net amount with a third tax rate. |
| tax_amount_3 | not mapped yet | Tax amount with a third tax rate. |
| tax_rate_3 | not mapped yet | Third tax rate applied. |
| tax_code_3 | not mapped yet | Tertiary tax code. |
| additional_amount_2 | not mapped yet | Additional amount 2. |
| additional_amount_3 | not mapped yet | Additional amount 3. |
| negative_amount_2 | not mapped yet | Second negative amount. |
| negative_amount_3 | not mapped yet | Third negative amount. |
| shipping_charges | not mapped yet | Charges for shipping. |
| sales_tax | not mapped yet | Sales tax amount. |
| sub_tax | not mapped yet | Sub-tax amount. |
| wi_tax | not mapped yet | Withholding tax amount. |
| county_tax | not mapped yet | County tax amount. |
| city_tax | not mapped yet | City tax amount. |
| custom_field_1 | not mapped yet | Custom field 1. |
| custom_field_2 | not mapped yet | Custom field 2. |
| custom_field_3 | not mapped yet | Custom field 3. |
| custom_field_4 | not mapped yet | Custom field 4. |
| custom_field_5 | not mapped yet | Custom field 5. |
| custom_field_6 | not mapped yet | Custom field 6. |
| custom_field_7 | not mapped yet | Custom field 7. |
| custom_field_8 | not mapped yet | Custom field 8. |
| custom_field_9 | not mapped yet | Custom field 9. |
| custom_field_10 | not mapped yet | Custom field 10. |
| firma | not mapped yet | Company name. |
| name | not mapped yet | Name of the company or individual. |
| strasse | not mapped yet | Street address. |
| postleitzahl | not mapped yet | Postal code. |
| id_nummer | not mapped yet | Identification number. |