
Security FAQ Documentation

General

What is Mobile2b?

TODO

How much data can I store in Mobile2b?

TODO

Do we have full access to our data on Mobile2b?

TODO


Technology

What is the underlying IT architecture?

TODO


Service Level

Is Mobile2b reliable to use for business-critical processes?

We offer different Service Level Agreements (SLAs) for any use case. TODO

What other services are offered in combination with Mobile2b?

See our service catalog TODO


IT Security

Is my data secure on Mobile2b?

We firmly believe that Mobile2b is one of the most secure places where you can store your data.

TODO

Because security is such an important topic for us, we have created a dedicated document on this that we call our IT Security Manifesto (German, English).

We also created a detailed breakdown of our security measures here: https://docs.google.com/spreadsheets/d/1orHIZ06gx5CX-AC7ww4xJZIC-xxvEYoV/


GDPR

Does Mobile2b comply with GDPR?

TODO

We offer a template for a Data Processing Agreement (German) that you can use. Alternatively, you can use the information provided in this template for your own documents.

List of Sub-Processors (English)

Technical and Organisational Measures (TOM) (German)

For any further information, please feel free to contact our data protection officer:

RA Friederike Scholz
Hohenstaufenring 58
50674 Köln
Germany
+49 221 420 424 54
scholz@ra-scholz.eu


Who is behind Mobile2b?

Mobile2b is developed and maintained by:

Mobile2b GmbH
Im Mediapark 5
50670 Köln
Germany
+49 221 630 608 560
info@mobile2b.com
D-U-N-S® 34-232-7118

What happens when we decide we no longer want to use Mobile2b?

TODO

What if something happens to Mobile2b, the company behind Mobile2b?

In the unlikely event that Mobile2b stops its operations (e.g. insolvency, liquidation) or stops actively maintaining the Mobile2b platform, we offer our customers a Contingency Agreement (German).

No matter what, you should feel confident that you can build your future digital business on top of Mobile2b.



Are all users well-trained/informed about their responsibilities of the correct handling of information, and is that training ensured by information/system owner?

Are contractors accessing/handling customer data required to sign a NDA?

Yes, according to our data protection policy, all stakeholders handling customer data have an NDA with Mobile2b.

Can all employees', contractors', and partners' user identities be centrally managed?

Yes, we offer the option to integrate any SSO provider to log into Mobile2b.

If applicable, are consumer identities centrally managed?

Yes. Since SSO login is used, all users who are allowed to access the system are centrally managed in Azure.

Are access rights managed?

  • Access rights management via an appropriate process (also with regard to the principle of least privilege (need-to-know principle)), which is determined, documented, and in place before the IT system is put into operation:
  • Assign, modify, revoke access rights
  • Information may be redistributed to other people on a business-related need-to-know basis.

Yes, RBAC is present. The permissions of a role can be defined only by admins. Since SSO is utilized, the role a user gets is matched based on the Azure groups they are part of.

Example: if a user is in the group mobile2b_admin, the corresponding user gets the admin role in the environment.
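
To illustrate this matching, here is a minimal TypeScript sketch; the group and role names mobile2b_admin/admin follow the example above, while the mapping, the additional groups, and the function names are hypothetical and not the actual implementation.

    type Role = "admin" | "editor" | "viewer";

    // Assumed mapping from Azure AD group names to application roles, maintained by admins.
    const GROUP_TO_ROLE: Record<string, Role> = {
      mobile2b_admin: "admin",
      mobile2b_editor: "editor",
      mobile2b_viewer: "viewer",
    };

    // Resolve the highest-privilege role from the groups contained in the SSO token.
    function resolveRole(azureGroups: string[]): Role | null {
      const precedence: Role[] = ["admin", "editor", "viewer"];
      const roles = azureGroups
        .map((group) => GROUP_TO_ROLE[group])
        .filter((role): role is Role => role !== undefined);
      return precedence.find((role) => roles.includes(role)) ?? null; // null = no access
    }

    // Example from above: a member of "mobile2b_admin" gets the admin role.
    console.log(resolveRole(["mobile2b_admin", "unrelated_group"])); // "admin"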

Must the information owner approve your access rights management process?

TODO

Are access rights regularly checked by the application owner or application steward?

TODO

Are users restricted from raising their own permissions?

Permissions are set only by the admin role; users cannot raise their own permissions.

Do you have a password management policy? (i.e. password length, expiration, complexity, history, etc.)

Yes, see our user management documentation for the password policy.

Is two-factor authentication used for authentication (administrative and customer)?

Azure SSO is used. Accounts are automatically created in the application if the Azure user is part of a specific group. If the user is not part of that group, the login is rejected.

Is this a customer-managed user environment (e.g.: customer desktops, laptops, smartphones; virtual customer environments (e.g., desktop, application, capsule))?

Yes, this is a customer-managed user environment. While the web application can be accessed from customer-managed devices, it is not limited exclusively to them; users may also access the application from devices that are not customer-managed.

Do you utilize any immobilized workstations in order to avoid unnecessary meandering of information?

Do you prevent long-term storage (as defined by the data owner) in your local user working environments?

Is data transfer always secured (e.g., encrypted email, encrypted file transfer, client server connections using state-of-the art encryption protocols)? What version of TLS and what cipher suites/algorithms are used to encrypt data in transit? Please provide url that customer will use to access the solution

Please check our IT Security Manifesto for details on our encryption policy.

Do you enforce encryption of communication between client and server/application - preferably by the server/application?

Please check our IT Security Manifesto for details on our encryption policy.

Is data only redistributed based upon the "need-to-know principle"?

Yes

Is data physically (e.g. extra disks) and/or logically (e.g. in extra databases) separated or masked from other customers' data? (How is customer data stored? In its own database, own VPC? Or is it a shared database? ) If data is in its own database, is the application layer shared with other customers or is this customer specific?

Yes, data is logically separated from other customers' data. While data is stored in the same database as other customer data, it is logically segregated using account identifiers. Each customer's data is associated with a unique account identifier, ensuring that it remains distinct and inaccessible to other customers. The application layer is shared among all customers, but the data segregation at the database level ensures customer-specific privacy and security.
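
As a minimal sketch of this logical segregation (all table, column, and helper names are hypothetical; this is not the actual data-access layer), every query is scoped by the authenticated customer's account identifier:

    // Illustrative sketch: the account identifier is injected server-side into every query,
    // so one customer can never read another customer's rows. All names are hypothetical.

    interface Db {
      query<T>(sql: string, params: unknown[]): Promise<T[]>;
    }

    interface RequestContext {
      accountId: string; // unique customer account identifier, taken from the authenticated session
    }

    interface ProcessRecord {
      id: string;
      accountId: string;
      name: string;
    }

    // The account identifier never comes from user input.
    async function listProcesses(db: Db, ctx: RequestContext): Promise<ProcessRecord[]> {
      return db.query<ProcessRecord>(
        "SELECT id, account_id AS accountId, name FROM processes WHERE account_id = ?",
        [ctx.accountId],
      );
    }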

Does the hosting environment meet customer's "basic protection" and "hardening" control requirements?

See the latest CIS benchmark results for Google Kubernetes Engine (GKE).

Malware protection

  • We utilize Google Container Analysis to detect malware in our container images before they are deployed: https://cloud.google.com/artifact-analysis/docs/artifact-analysis
  • We have network policies in place to control traffic between all our workloads, limiting the potential spread of malware within the cluster.
  • We keep our Kubernetes clusters, nodes, and container images up to date with the latest security patches.

Firewall protection

Please refer to https://cloud.google.com/kubernetes-engine/docs/concepts/firewall-rules?hl=de

Security patch management process

  • We regularly scan our container images and Kubernetes nodes for known vulnerabilities using tools like Google Container Analysis.
  • We utilize GKE's automatic upgrades feature to ensure our control plane and nodes are regularly updated with the latest patches.
  • We gather feedback from stakeholders and review the patch management process regularly to identify areas for improvement.
  • We conduct regular penetration tests (please see the related assessment).

Auto-logoff/screensaver

We utilize JSON Web Tokens. Sessions are active for 15 minutes and are then automatically refreshed. After seven days, users are automatically logged out of the system and must log in again.

No installation of unnecessary components

Deactivate unnecessary programs, services, commands

Restrict/Control usage of privileged programs, system tools, and file shares

Regular security checks

Constant elimination of well-known vulnerabilities

Yes.

Firewall protection: While we do not have a firewall on the ingress controller (all traffic is allowed), our infrastructure incorporates other security measures to safeguard against unauthorized access. We utilize brute-force protection and application-level security controls to mitigate risks and ensure the integrity of our environment.

Auto-logoff/screensaver: Our authentication system uses JSON Web Tokens (JWTs); the refresh token is stored in local storage with a maximum duration of one week, and access tokens are valid for 15 minutes.

Furthermore, our application undergoes frequent penetration tests, and we prioritize the constant elimination of well-known vulnerabilities.
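
A minimal sketch of the JWT session model described above (access tokens valid for 15 minutes, refresh tokens capped at seven days), assuming the jsonwebtoken npm package; the secret handling and claim names are illustrative assumptions, not the actual implementation:

    // Sketch only: 15-minute access tokens, refresh tokens capped at seven days.
    import jwt from "jsonwebtoken";

    const SECRET = process.env.JWT_SECRET ?? "change-me"; // illustrative secret handling

    function issueTokens(userId: string) {
      const accessToken = jwt.sign({ sub: userId }, SECRET, { expiresIn: "15m" });
      const refreshToken = jwt.sign({ sub: userId, typ: "refresh" }, SECRET, { expiresIn: "7d" });
      return { accessToken, refreshToken };
    }

    // Called when the access token expires; fails after seven days, forcing a new login.
    function refreshAccessToken(refreshToken: string): string {
      const claims = jwt.verify(refreshToken, SECRET) as { sub: string; typ?: string };
      if (claims.typ !== "refresh") throw new Error("not a refresh token");
      return jwt.sign({ sub: claims.sub }, SECRET, { expiresIn: "15m" });
    }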

Do you have an access rights management process?

Yes, we implement access rights management within our Kubernetes environment. We leverage Google Cloud's RBAC, assigning limited rights to service accounts at the Kubernetes level. For example, our GitLab CI/CD pipeline utilizes a specific service account for image updates, while individual users have kubeconfig files with appropriate access levels based on their roles.

Is data flow control out of the respective environment (incl. deletion / destruction) managed by the user according to the "need-to-know principle"?

Yes. Our application's data flow adheres to the need-to-know principle, including the use of Mailgun for sending email. We transmit only essential data to Mailgun, ensuring security through encryption and secure protocols. Example:

Admins of the platform have the ability to configure the email details such as the subject, recipient, and content. This configuration is limited to what is necessary for the task of sending an email. When an admin decides to send an email, they are directly controlling the flow of their data out of our environment to the external email recipient. All actions related to data flow, including sending emails and deleting configurations, are logged for audit purposes. This ensures that there is a trail of user actions which can be reviewed for compliance.
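
As an illustration of this flow (a sketch under assumptions, not the actual implementation): only the admin-configured recipient, subject, and content are forwarded to Mailgun's messages endpoint, and the action is written to an audit trail. All helper and variable names are hypothetical.

    interface OutgoingEmail {
      to: string;
      subject: string;
      text: string;
    }

    // Illustrative audit trail; a real system would persist these entries.
    function appendAuditLog(entry: { actor: string; action: string; details: unknown; at: Date }): void {
      console.log(JSON.stringify(entry));
    }

    async function sendEmail(adminUserId: string, mail: OutgoingEmail): Promise<void> {
      const domain = process.env.MAILGUN_DOMAIN ?? "example.org";
      const apiKey = process.env.MAILGUN_API_KEY ?? "";

      // Only the essential, admin-configured fields leave our environment.
      const body = new URLSearchParams({
        from: `noreply@${domain}`,
        to: mail.to,
        subject: mail.subject,
        text: mail.text,
      });

      await fetch(`https://api.mailgun.net/v3/${domain}/messages`, {
        method: "POST",
        headers: { Authorization: "Basic " + Buffer.from(`api:${apiKey}`).toString("base64") },
        body,
      });

      // Audit trail for data leaving the environment.
      appendAuditLog({ actor: adminUserId, action: "email.sent", details: { to: mail.to, subject: mail.subject }, at: new Date() });
    }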

Is all unnecessary Internet access disabled for the application?

Yes. The application requires internet access for essential functionalities such as interacting with the Mailgun API. While there are no restrictions on internet access within our Kubernetes environment, the application itself does not access unnecessary external websites.

Do you have measures in place to protect your backups? Are backups encrypted? and provide algorithm

See IT Security Manifesto.

Are users able to see that information stored in your system is classified as "Restricted" customer data? (i.e. can users see that information stored in your system is classified (e.g. labeled as "internal" or "restricted" per customer's classification system)?

Does your organization follow professional system management processes according to ITIL, ISO20000, or similar compliance standards?

Yes, we are fully TISAX-certified.

Does your organization follow information security management processes according to ITIL, ISO27001, or similar compliance standards?

Yes, we are fully TISAX-certified.

Do you have regular provisions of any information security program certificates such as ISO27001, SOC2, SOC1, HITRUST, etc.?

Yes, we are fully TISAX-certified.

Do you permit audits on demand by customer or trusted third parties? If yes, can you provide the report? Also, do you permit clients to perform onsite audits upon request?

Although Mobile2b conducts regular penetration tests, we are open to collaborating with you on additional penetration tests (contact us for contractual details).

Are you able to report relevant customer incidents to the customer within 24 hours?

Please refer to our SLA.

Are your administrators well-trained and committed to ensuring the confidentiality, integrity, and availability of customer data?

Yes, we are fully TISAX-certified.

Do you log and monitor administrative activities on systems that process, transmit, and/or store customer's data?

Link

Do you use Privileged Access Management tools like CyberArk or a jump server to monitor privileged access? If not, how are administrative activities tracked and monitored? How do you ensure admins and other employees aren't extracting customer data from your environment? Can anyone place information on a usb, send through email, etc.?

Logging and Monitoring Administrative Activities

We maintain audit logs for various activities such as exporting data, importing data, deleting, and updating objects, enabling us to track and review administrative actions.

Use of Privileged Access Management Tools

While we currently do not utilize Privileged Access Management tools like CyberArk or a jump server, our audit logs capture administrative activities, including those performed by privileged users. These logs are regularly reviewed to ensure compliance and detect any unauthorized access or activities.

Prevention of Data Extraction

To prevent unauthorized data extraction, we rely on role-based access control (RBAC) with finely-tuned permissions. Access to sensitive data is restricted based on job roles and responsibilities, ensuring that employees only have access to the data necessary to perform their duties. Regular reviews and monitoring of access permissions further mitigate the risk of unauthorized data extraction.

Do you log and monitor security events on systems that process, transmit, and/or store customer's data? How do you become aware of malicious events? How do you monitor your network and application traffic for suspicious activity and threats?

Yes, we utilize Prometheus and Loki to gather metrics and logs from our Kubernetes cluster, including network activity, resource usage, and application data. Grafana visualizes these metrics and triggers alerts via email and Slack when specific thresholds are exceeded, enabling us to promptly detect and respond to potential security incidents.

Do you delete/remove customer's data from your systems? (e.g. deletion of files, overwriting of data, etc.).

Link

Who is your hosting provider? Are they ISO 27001 certified? Do they have SOC 2 Type 2 audit reports available for review?

Link

Our hosting provider is Google Cloud Platform (GCP). Google Cloud is ISO 27001 certified. As for SOC 2 Type 2 audit reports, Google Cloud regularly undergoes audits to assess the effectiveness of its controls and processes.

Do you conduct 3rd-party network penetration tests against your infrastructure at least annually?

Link

Do you follow a documented secure software development lifecycle?

  • Yes, every code modification undergoes a review process, adhering to the principle of requiring at least two qualified reviewers.
  • We also integrate SonarQube's Quality Gates into our review process, which automatically assesses the code against predefined criteria before it is considered for approval
  • We leverage SonarQube for both code quality assessments and security vulnerability detection as part of our development pipeline
  • This automated analysis is triggered with every change to the codebase, ensuring continuous identification of potential security issues before deployment to production
  • Additionally, we utilize Google Container Analysis to scan vulnerabilities in our Docker images at the Kubernetes level, enhancing our security measures and maintaining the integrity of our containerized applications

Do you conduct code reviews prior to production implementation? What about any Static/Dynamic application security vulnerability testing (DAST/SAST)?

  • Every code modification undergoes a review process, adhering to the principle of requiring at least two qualified reviewers.
  • We also integrate SonarQube's Quality Gates into our review process, which automatically assesses the code against predefined criteria before it is considered for approval
  • We leverage SonarQube for both code quality assessments and security vulnerability detection as part of our development pipeline
  • This automated analysis is triggered with every change to the codebase, ensuring continuous identification of potential security issues before deployment to production
  • Additionally, we utilize Google Container Analysis to scan vulnerabilities in our Docker images at the Kubernetes level, enhancing our security measures and maintaining the integrity of our containerized applications

Do you verify that all of your software suppliers adhere to industry standards for Systems/Software Development Lifecycle (SDLC) security?

  • We ensure the security of the software our application relies on by reviewing suppliers' security practices and maintaining up-to-date versions
  • This process involves regularly reviewing security advisories, applying timely patches, and implementing Kubernetes security updates to mitigate risks
  • Being TISAX certified, we consistently review and ensure that the software we utilize complies with the stringent security guidelines required under TISAX

Do you review your applications for security vulnerabilities and address any issues prior to deployment to production?

  1. Automated Code Analysis: Utilizing SonarQube, we perform automated security scanning and code quality checks on every code change
  2. Container and Cluster Security: Given our use of Kubernetes for orchestrating all our applications, we implement security measures at the container level and throughout our cluster. This includes network policies, role-based access control (RBAC), and secure container configurations.
  3. Monitoring and Logging: With Grafana, Loki, and Prometheus, we have a robust monitoring and logging framework that allows us to detect and respond to security incidents in real-time
  4. Regular Security Reviews and Updates: Beyond automated tools, we conduct regular security reviews and update our dependencies and containers to mitigate known vulnerabilities

Do you encrypt data at rest (on disk/storage) within your environment? If so, what algorithm is used?

See IT Security Manifesto: Encryption

Is customer data encrypted with its own key?

No

What key generation/management solution do you use? Do you generate/upload/manage your own keys? How are they accessed? How often are they rotated?

See IT Security Manifesto: At-rest encryption

Does customer get administrative accounts where they can create roles and give access to other customer users? Is customer able to see user activity logs within the application? Example: user logins, access to data, change of data, change to user permission, admin activities?

Link

Customer admins are able to see (limited) audit logs:

  • who exported data
  • who imported data
  • who created data
  • who deleted data
  • who edited data

Do you monitor and log user logins, access to data, change of data, change to user permission, admin activities? If not, is this activity something that can be exported to customer on a frequent basis?

Link

Do you have a DLP solution? If so, what solution?

See IT Security Manifesto: Backup and recovery

Do you support SAML 2.0 SSO?

Yes

Do you have documented business continuity and redundancy plans and disaster recovery plans? How often are these plans tested?

See IT Security Manifesto: Backup and recovery

Will you be providing customer with a mobile application?

Yes, see Link

How do you address security with your hosting provider and any other providers?

Annual supplier/provider reviews as part of TISAX.

Link

Do you have any contractual obligations with your providers to ensure security requirements are met?

  • DPAs + SLAs

How is data handled and deleted at the end of service? Is this documented in your contract?

See IT Security Manifesto: Data deletion

Please review the topic "Datenlöschung" (data deletion) in the Security Manifesto: https://docs.google.com/document/d/1PDrrE63jA9AeSNuO5Xrnqmij6Dq1hR3ow2uJnL0UNcY/edit#heading=h.e46gnbwr79jn

Do you conduct network, operating system and application layer vulnerability scans regularly? If so, how often?

We utilize Google Container Analysis to scan vulnerabilities in our Docker images at the Kubernetes level, enhancing our security measures and maintaining the integrity of our containerized applications. Additionally, we conduct regular penetration tests.

Do you have a capability to rapidly patch vulnerabilities across all of your computing devices, applications, and systems? Please specify your patching timeframes for Critical/High/Medium findings

Deployments, including updates and patches, are automatically applied to the customer's environment, eliminating the need for manual user intervention:

  • Hotfixes: Deployable at any time to address urgent issues
  • Feature Releases: Scheduled every 4-6 weeks, following thorough testing to ensure quality
  • Changelog and Schedule Communication: We maintain a detailed changelog that includes information on when the next update is scheduled and details about the changes or new features being applied. This changelog helps keep all stakeholders informed about upcoming updates
  • Discord Notifications: We have a dedicated Discord channel that notifies stakeholders when a new update has been deployed

Is there a network architecture and data flow diagram?

https://git.mobile2b.de/mybusiness-ai/infrastructure/-/raw/develop/kubernetes/mybusiness-ai-k8s.png

Do you review firewall settings? If so, how frequent? Do you have a deny all default setting then open up ports as needed?

Link

Do you utilize IDS/IPS systems for protection?

We have brute-force protection in place that prevents additional login attempts after 5 failed logins within a 15-minute period.
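
A minimal sketch of such a lockout, using the 5-failures-per-15-minutes window stated above; the in-memory store and all names are illustrative only:

    // Sketch of brute-force protection: block further login attempts after 5 failures
    // within 15 minutes. The in-memory map is illustrative; a real system would persist this.

    const WINDOW_MS = 15 * 60 * 1000;
    const MAX_FAILURES = 5;

    const failures = new Map<string, number[]>(); // key (e.g. username or IP) -> failure timestamps

    function registerFailure(key: string): void {
      const now = Date.now();
      const recent = (failures.get(key) ?? []).filter((t) => now - t < WINDOW_MS);
      recent.push(now);
      failures.set(key, recent);
    }

    function isLockedOut(key: string): boolean {
      const now = Date.now();
      const recent = (failures.get(key) ?? []).filter((t) => now - t < WINDOW_MS);
      return recent.length >= MAX_FAILURES;
    }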

How do you ensure OS baseline compliance monitoring and remediation? Do you include client computers in your hardening processes? Are the administrative consoles hardened?

As part of our commitment to security and compliance, we are TISAX certified. TISAX certification encompasses various aspects of information security, including those related to operating systems. It demonstrates our adherence to rigorous security standards, including measures to ensure operating system baseline compliance, hardening processes for client computers used by remote employees, and the security of the administrative consoles managing our Kubernetes environment.

Do you have an up-to-date virus and malware protection installed on all systems?

Yes

  • See TISAX
  • See GKE


How do you ensure that development and consolidation systems are separated from productive systems; and how are changes from the development system transferred to the production system via the consolidation system, as well as how the test or consolidation system is supplied with test data?

For testing and development purposes, there are multiple separate clients, exclusively filled with manually generated dummy data (no productive data!), but otherwise providing all aspects for testing the functionality of the software.

Testers and developers do not have access to productive systems.

How do you ensure that the confidentiality level (internal/confidential/secret) is marked on relevant emails, documents, and external data storage?

Only image files (jpeg, png), MS files (.xlsx, .pptx), and PDFs are recommended for storage, although theoretically all file formats can be stored.

How do you ensure that the confidentiality level (internal/confidential/secret) is labeled on (digital) information (e.g. in PowerPoint presentations) and on mobile storage media (e.g. USB sticks, CDs, or external hard drives)?

No external storage media are used, as the system is server-based and the server is not physically accessible. In principle, all file formats can be stored; however, only images (jpeg, png), MS Office files (.xlsx, .pptx), and PDFs are meaningful.

Do you implement a multi-layer architecture or install the application on a secure server with remote display capabilities for handling data with high or very high protection requirements?

All data except documents and photos (files) is stored in databases. Documents and photos are encrypted at rest using 256-bit AES.

How do you ensure that the data is tagged with an authenticity feature to prove its authenticity for later processing?

No further authenticity features. Data is only used internally (interface through intermediate file).

How do you ensure that the collected data is authentic?

No further authenticity features. Data is only used internally (interface via intermediate file).

How do you ensure that end/system users are restricted to access only the data they are authorized for ("Need-To-Know")?

There are roles. These roles are assigned to individual users. The number and content of the roles can be flexibly adjusted by the admin.

How do you ensure that the implementation of the authorization concept is done server-side in multi-tier architectures (n-tier architectures)?

The entire application is server-based, therefore also the rights & roles model.

Do you process company data only on devices provided by the customer or certified devices (e.g. NPC)?

Login is via Azure AD and therefore only possible with licensed devices. However, pinging the homepage is possible from any device.

How do you ensure that you use the services or mechanisms available in the infrastructure to protect the integrity of the data?

We ensure data integrity by storing all data in databases (MariaDB, MongoDB, InfluxDB) and by keeping backups in place in case of data loss or corruption.

How do you ensure that the authenticity of the data to be processed is verified?

We use JSON Web Tokens (JWT) with HS256 to transmit authenticated information. JWTs are always verified for authenticity.

How do you ensure that data is stored securely in a restricted area with appropriate access rights or encrypted?

Data transfers from/to clients are encrypted with TLS. Persistent disks used by the hosted Kubernetes service are encrypted at the hardware layer.

Do you store company data on network drives?

Files are stored on network drives and linked in the system.

How do you ensure that the data is transmitted within secured network zones or encrypted?

Incoming and outgoing data is always encrypted. Apart from the GCP in-transit encryption, there is no additional encryption inside the cluster.

How do you ensure that emails are deleted after their retention period?

See IT Security Manifesto.

How do you ensure that the log records contain sufficient information for evaluating a security event, including user identification, timestamp within a specified tolerance, access point, type of event, identification of the resource involved, and success/failure of the action?

Application logs are stored in Elasticsearch, which runs locally in the cluster. These logs do not contain the mentioned events.

Login events are stored in the database.

Kubernetes cluster control plane logs are handled by the GKE service. This includes an audit log of the cluster's inner workings, which covers the mentioned events.

Log exchange with a customer log solution or SIEM is not supported.

How do you ensure that the IT system protects against unauthorized deletion?

K8s cluster audit logs can't be deleted or modified by an administrator.

How do you ensure that the IT system provides complete logging with:

  • Sufficient buffer for log data in case the log host is unreachable?
  • Warnings when the available storage space for logs is running low?

Application logs are stored for 60 days. Alerts fire when disk space is running low. The Docker daemon captures containers' stdout and stderr streams and stores them in log files on the Kubernetes host. If Elasticsearch is not available, the log shipper (Filebeat) stops reading new entries from the logs, but the logs themselves do not disappear immediately, so temporary downtime of Elasticsearch does not cause a loss of log data.

How do you ensure that known vulnerabilities of the application are promptly addressed through updates (security advisory patches) and/or changes in security settings, based on the criticality and exploitability of the vulnerability within specific timeframes as defined in the security concept ISMS-3b-014 Vulnerability Management?

Regular penetration tests of all subsystems are carried out by external security experts.

How do you ensure that the executables are protected against unauthorized disclosure through architectural measures?

User authentication on the Google Cloud web console is handled by a username and password prompt as well as 2FA. Google can challenge users for additional information based on risk factors, such as whether they have logged in from the same device or a similar location in the past. After authenticating the user, the identity service issues credentials such as cookies and OAuth tokens that can be used for subsequent calls. Google Identity and Access Management (IAM) authorizes access to specific resources based on roles and permission policies.

How do you ensure that the application's runtime behavior is protected from indirect manipulation?

Do you develop the application according to the currently valid security standards described in the internal development guidelines?

Best-practice measures are considered in secure software development (e.g. consideration of the OWASP Top 10, input validation, protection against XSS, CSRF, etc.).
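
A few framework-agnostic sketches of the kinds of measures mentioned (input validation, output escaping against XSS, and a CSRF token check); all names and patterns are illustrative assumptions, not the actual code:

    import { randomBytes, timingSafeEqual } from "crypto";

    // Input validation: accept only values matching an explicit pattern.
    function validateProcessName(input: string): string {
      if (!/^[\w .-]{1,100}$/.test(input)) throw new Error("invalid process name");
      return input;
    }

    // XSS mitigation: escape user-controlled text before rendering it into HTML.
    const HTML_ESCAPES: Record<string, string> = { "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;" };
    function escapeHtml(text: string): string {
      return text.replace(/[&<>"']/g, (c) => HTML_ESCAPES[c]);
    }

    // CSRF protection: issue a random per-session token and compare it in constant time.
    function issueCsrfToken(): string {
      return randomBytes(32).toString("hex");
    }

    function verifyCsrfToken(sessionToken: string, requestToken: string): boolean {
      const a = Buffer.from(sessionToken);
      const b = Buffer.from(requestToken);
      return a.length === b.length && timingSafeEqual(a, b);
    }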

How do you ensure that unauthorized users do not have write access to executables or detect manipulations to executables and prevent the execution of manipulated executables for customers with normal/high protection requirements?

Unauthorized personnel do not have write access to executables. Docker images are automatically scanned for issues and vulnerabilities upon push to the registry.

How do you ensure that all accesses to an application and outgoing data transmissions are disclosed and documented?

All accesses to the application and outgoing data transmissions are disclosed and documented through third-party providers in our architecture diagram.

How do you ensure that the authentication mechanism is protected against brute force attacks?

In the upcoming release, there will be a limit of 3 login attempts within 15 minutes from the same IP address.

How do you ensure that existing permissions can be efficiently identified in the system?

Permissions are granted through groups. Within the groups, the assigned users are listed in a way that allows for efficient verification.

How do you ensure that the application is prepared for productive use (hardening), including deactivating all unnecessary interfaces, using a communication protocol appropriate for the data sensitivity, removing all test users, and removing default usernames and passwords?

No test or default users are used in production environments, instead there is a concept of a support user that is only active for 15 minutes after activation by the customer.

How do you ensure that the application is designed as a multi-tier architecture or n-tier application (n>1) (recommendation: n=3)?

There is no n-tier architecture. All services and databases run on the same cluster and in the same namespace. Internally, the services communicate with each other over HTTP, and with the databases (MySQL, MongoDB, and InfluxDB) using their respective database protocols.

How do you ensure that only the operations provided by the business logic at a given time and for which the user is authorized are available to the user to protect data from unauthorized changes?

Only roles are assigned. New roles can be created and assigned as needed.

How do you ensure that mechanisms are provided to revoke unused or compromised credentials in the application?

Revocation information is published through OCSP.

How do you ensure that the user can verify that the application is authentic?

The URL only contains the customer name.

How do you ensure that you implement organizational, procedural, and technical measures for regular analysis of traces and access logs of users and subcontractors to detect any abnormalities?

We are fully TISAX-certified.

Do you log the following events:

• Failed authentication attempts (if authentication is present)
• Anomalies detected by the application
• Application exceptions (crashes, unhandled exceptions, ...)
• Access through maintenance interfaces

Failed login attempts are logged. Exceptions and crashes are logged. Brute-force protection prevents additional login attempts after 5 failed logins within a 15-minute period.

How do you ensure that every data export is logged?

Every data export as well as deletion process is logged in the environment. The individual events can be viewed in a list under "Account" > "Activity" by administrators. This list can be filtered by any date.
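
An illustrative sketch of such an activity log (the entry shape, action names, and filter helper are hypothetical, not the actual data model):

    // Hypothetical shape of the entries shown under "Account" > "Activity",
    // covering exports and deletions, with a simple date-range filter.

    interface ActivityEntry {
      at: Date;
      userId: string;
      action: "export" | "delete" | "import" | "create" | "edit";
      object: string; // e.g. the exported or deleted record
    }

    const activityLog: ActivityEntry[] = [];

    function logActivity(entry: ActivityEntry): void {
      activityLog.push(entry);
    }

    // Administrators can filter the list by any date range.
    function filterByDate(from: Date, to: Date): ActivityEntry[] {
      return activityLog.filter((e) => e.at >= from && e.at <= to);
    }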

How do you ensure that applications use at least the local log service of the underlying platform?

Application logs are stored in local Elasticsearch. K8s cluster control plane logs are handled and stored by GKE service.

How do you ensure that activities of emergency users are logged and there is a documented retention period for the log data?

There are no emergency users, only support access if opted-in by the client's administrator.

Do you use appropriate procedures and sufficient key lengths in accordance with cryptographic algorithm requirements for generating certificates and key pairs?

RSA 2048-bit encryption is used.

How do you ensure that the generation of certificates and key pairs is done according to the specifications for creating certificates and key pairs (see cryptography concept)?

TLS certificates are issued by cert-manager, a cloud-native client implementing the Let's Encrypt ACME protocol, which watches for specific annotations on ingress resources and issues certificates accordingly. RSA 2048-bit keys are used.

How do you ensure that the storage areas used for credentials are overwritten once they are no longer needed by an application?

Each user is created as a user in the system. User authentication is done via Azure AD, with a session active for 15 minutes that is then automatically refreshed. After seven days, users are automatically logged out of the system and must log in again.

The integration of additional interfaces is possible. For this, administrators can create, manage, and delete API keys under the "System Integration" menu.

How do you ensure that time-limited credentials or authenticity proofs are renewed in a timely manner?

User accounts are created in the system for each user. Users log in via Azure AD; a session is active for 15 minutes and is automatically refreshed afterwards. Users are automatically logged out of the system after seven days and must log in again. Certificates are valid for 3 months and are renewed when 30 days of validity remain.

Additional interfaces can be connected by creating, managing, and deleting API keys under the menu item "System Integration" by the administrator.

How do you ensure that the authenticity of the change request is verified before credentials are modified?

Admins can replace passwords; sessions are short-lived (15 minutes). With 2FA, the second factor must be re-entered (currently not enforced). There are no admins via local accounts, only AD integration / ADFS.

How do you ensure that credentials are protected from interception during their transmission?

Credentials are protected from interception during transmission by ensuring that all incoming network traffic is encrypted, with TLS connections terminated on the ingress controller inside the cluster.

How do you ensure that credentials are deleted once they are no longer needed?

API keys are always hard-deleted.

How do you ensure that the validity of the credentials is revoked, if the type of credentials allows it?

Revocation information for certificates is published through OCSP, and they can be revoked using the command certbot revoke --cert-path /path/to/certificate --key-path /path/to/key.

How do you ensure that before permanently deleting credentials, it is verified that they are no longer needed to access stored or archived data?

Before permanently deleting credentials, it is verified that they are no longer needed to access stored or archived data.

Do you provide details about the components you use?

We have no control over the operating system used by the managed cloud service. For details on components such as databases, logging, monitoring, and ingress, please refer to the provided architecture model.

Do you have a security zone concept in place?

There is no dedicated namespace for individual customers; separation is purely software-based logical data separation using a customer signature (identifier of the customer) and data access layer controls in the source code of the application.

How do you ensure communication relationships (including protocols and security concept)?

Connections to databases are established over the appropriate protocol for the specific database. Connections to databases are not encrypted.

Do you provide backup and restore functionality with defined Recovery Point Objective (RPO) and Recovery Time Objective (RTO)?

The application provides backup and restore functionality with defined Recovery Point Objective (RPO) and Recovery Time Objective (RTO).

How do you ensure that you have a backup process for your Information System that is limited in access, password-restricted, and tracked, with a tested restoration process in place?

The backup process for our Information System is limited in access, password-restricted, and tracked, with a tested restoration process in place.

Do you provide export interfaces for data?

A complete data export is possible via the REST API.

How do you ensure the availability in case of physical impairments?

Backups are stored only in S3. However, S3 itself is redundant within an AWS region.

How do you ensure the irretrievable deletion of all customer data after the end of the contract?

See IT Security Manifesto.

How do you ensure that customer data is protected from unauthorized read and write access?

We don't use cloud services for encryption.

How do you ensure the maintenance of data integrity?

See IT Security Manifesto.

Do you implement hardening measures for components such as disabling unnecessary services, deleting test users, and checking patches for security implications?

Do you adhere to security standards such as CIS Benchmarks, OWASP, BSI recommendations, NIST, and manufacturer recommendations? What security software do you use, such as IDS, IPS, AV, SIEM, TripWire?

We have implemented measures to strengthen the security of our Kubernetes cluster and applications following best practices outlined in CIS benchmarks.

How do you ensure that personal data is anonymised in the Cloud environments for development, integration, testing, and pre-production?

User data creation is fully in control of the client.

How do you ensure that you implement organisational, procedural, and technical measures for regular analysis of traces and access logs of users and subcontractors to detect any abnormality?

We are fully TISAX-certified.

How do you ensure the design of the rights/roles model?

"The applications are authorized to retrieve endpoints from the Kubernetes API, nothing else. Their RBAC roles are as follows:

Rules:

  • apiGroups:
    • """" resources:
    • endpoints resourceNames:
    • {service_name} verbs:
    • get

See IAM roles: https://docs.google.com/spreadsheets/d/1-Sdiy26Zm7OFP_GPR3iirsgMTH_Pu4Dh6dxdpQlfIJA/edit?usp=sharing"

How do you ensure that user access to the application is based on the authentication system, preferably SAMLv2 federated identity with IDP? If the solution does not support federated identity and manages accounts and passwords locally, how do you enforce the specific password security policy of the customer?

SSO is fully supported: Azure AD, ADFS using SAML, and any OAuth 2.0 provider.

How do you ensure that the authorisation workflows for access to classified data are formalised by the project manager?

The project manager formalizes the authorisation workflows for access to classified data through the full role and permission system implemented in the software.

How do you ensure that the solution allows automated user account management through a connector or API?

It is possible.

How do you ensure that the solution displays the date and time of the last user connection after the user's authentication to detect unauthorized account usage?

Authentication events are logged internally.

How do you ensure that sessions are automatically disconnected after a configurable period of inactivity and redirected to a dedicated "end of session" page?

Sessions are automatically disconnected after the auth token expires (15 minutes), and users are automatically redirected to the login page.
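
A client-side sketch of this behavior (assuming a browser environment; the login path and wrapper name are assumptions, not the actual code):

    // Illustrative client-side sketch: when the expired auth token is rejected (HTTP 401),
    // the user is sent back to the login page.
    async function apiFetch(path: string, init?: RequestInit): Promise<Response> {
      const response = await fetch(path, init);
      if (response.status === 401) {
        // Access token expired and could not be refreshed: end the session in the UI.
        window.location.assign("/login");
      }
      return response;
    }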

How do you ensure that the solution does not allow 2 or more simultaneous connections with the same user account?

The use of multiple clients in parallel is currently allowed.

How do you ensure that the agreed service levels are met?

We monitor cluster node host metrics, pod/container metrics, and application availability using external services.

Do you have requirements and procedures in place for vetting and auditing subcontractors?

There are confidentiality agreements between Mobile2b and other subcontractors.

Do you provide an overview of the cryptographic algorithms used and their parameterization?

We use bcrypt for password hashing and 256-bit AES encryption for documents and photos.
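
A minimal sketch of these two primitives, assuming the bcrypt npm package and Node's built-in crypto module; the cost factor, the AES-GCM mode, and the key handling are assumptions, not the actual parameterization:

    // Sketch only: bcrypt password hashing and AES-256 encryption of file content.
    import bcrypt from "bcrypt";
    import { createCipheriv, randomBytes } from "crypto";

    async function hashPassword(plain: string): Promise<string> {
      return bcrypt.hash(plain, 12); // cost factor 12 is an assumption
    }

    async function verifyPassword(plain: string, hash: string): Promise<boolean> {
      return bcrypt.compare(plain, hash);
    }

    function encryptDocument(content: Buffer, key: Buffer /* 32 bytes */): Buffer {
      const iv = randomBytes(12);
      const cipher = createCipheriv("aes-256-gcm", key, iv);
      const encrypted = Buffer.concat([cipher.update(content), cipher.final()]);
      // Store IV and auth tag alongside the ciphertext so the document can be decrypted later.
      return Buffer.concat([iv, cipher.getAuthTag(), encrypted]);
    }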

How do you ensure the monitoring and maintenance of the security level of the cryptographic algorithms used?

No encryption. GKE, S3, and no data migration planned.

How do you ensure that authentication credentials/passwords meet specified requirements?

Do you have processes for password management?

New users receive a one-time login link via email and are forced to change their password on the first login, or single sign-on (SSO) via Active Directory can be used. Password reset is done by clicking a "Forgotten password" link and following the provided instructions.
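
A sketch of how such a one-time login link could work (the token length, 24-hour expiry, and in-memory storage are assumptions, not the actual implementation):

    // Illustrative sketch of a one-time login link: a random, expiring, single-use token.
    import { randomBytes } from "crypto";

    interface OneTimeToken {
      token: string;
      userId: string;
      expiresAt: Date;
      used: boolean;
    }

    const tokens = new Map<string, OneTimeToken>(); // in-memory for illustration only

    function createLoginLink(userId: string, baseUrl: string): string {
      const token = randomBytes(32).toString("hex");
      tokens.set(token, { token, userId, expiresAt: new Date(Date.now() + 24 * 60 * 60 * 1000), used: false });
      return `${baseUrl}/first-login?token=${token}`; // sent to the new user by email
    }

    function redeemLoginLink(token: string): string {
      const entry = tokens.get(token);
      if (!entry || entry.used || entry.expiresAt < new Date()) throw new Error("invalid or expired link");
      entry.used = true; // single use; the user must now set a new password
      return entry.userId;
    }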

How do you ensure that processes and technologies are in place for managing key material?

We rotate AWS keys and secrets once a year.

How do you ensure timely handling of vulnerabilities and security incidents?

Static code analysis with SonarLint, TSLint, SonarQube, internal reviews, and pentests. Container images are scanned for vulnerabilities and issues automatically upon pushing to the registry.

How do you ensure the detailed logging as well as the technical and temporal availability of log files for the detection and investigation of security events/incidents?

See IT Security Manifesto. We guarantee detailed logging and the technical and temporal availability of log files for the detection and investigation of security events/incidents.

How do you ensure that IT security aspects are considered in the change management process?

Please refer to: https://docs.google.com/document/d/16j2bZR8bBYPwnSslx2CuW8aRjCW5S39Lfb-RKjE4xgg/edit#

How do you ensure that you implement organizational, procedural, and technical measures for regular analysis of traces and access logs of users and subcontractors to detect any abnormalities?

We are fully TISAX-certified.

Do you have processes in place for physical authorization management, authorization review, and physical access traceability?

Physical access control is managed by Google Cloud employees as the service is cloud-based, and no other individuals have physical access.

How do you ensure that you respect good security practices mentioned in international standards such as ISO 27xxx standards and OWASP guides? Can you provide a list of your certifications, application scope, and validity period?

N/A. Mobile2b does not host the data itself; our hosting partners are fully certified.

How do you ensure that the Service Provider is ISO27001 certified for projects involving critical data, especially personal data?

Data hosting partners are fully certified for ISO27001.

How do you ensure that a CISO is appointed at your company?

Hosting partners are fully certified.

Do you provide the information system security policy that you have implemented and inform customers of any changes in this policy?

We have implemented an information system security policy and will inform customers of any changes in this policy.

Do you ensure traceability in your app?

How do you ensure that the solution allows the exchange of logs with a log collector or SIEM of the Customer?

The process is described in a shared document with an assigned number.

Could you provide support for your app?

How do you ensure that state-of-the-art security measures are implemented in your developments, including following OWASP recommendations?

Regular developer training, internal best practices, and regular penetration testing by external parties according to OWASP are implemented to ensure state-of-the-art security measures are in place.

Do you conduct regular audit and penetration testing on your app to ensure its security?

How do you ensure that your solution and all its components are regularly audited, at least annually, as part of a continuous improvement process? Can we access the audit reports on the services used?

We are fully TISAX-certified.

How do you ensure that regular audits, including penetration tests and configuration audits, are conducted on the solution? Do you consider executing automatic solutions for intrusion tests on cloud services? How do you ensure that any detected gaps are corrected promptly and within a reasonable period according to their criticality?

We conduct regular penetration tests and are open to collaborating on additional penetration tests.

