TODO
TODO
TODO
TODO
We offer different Service Level Agreements (SLAs) for a variety of use cases. TODO
See our service catalog TODO
We firmly believe that Mobile2b is one of the most secure places to store your data.
TODO
Because security is such an important topic for us, we have created a dedicated document on this that we call our IT Security Manifesto (German, English).
We also created a detailed breakdown of our security measures here: https://docs.google.com/spreadsheets/d/1orHIZ06gx5CX-AC7ww4xJZIC-xxvEYoV/
TODO
We offer a template for a Data Processing Agreement (German) that you can use. Alternatively, you can use the information provided in this template for your own documents.
List of Sub-Processors (English)
Technical and Organisational Measures (TOM) (German)
For any further information, please feel free to contact our data protection officer:
RA Friederike Scholz
Hohenstaufenring 58
50674 Köln
Germany
+49 221 420 424 54
scholz@ra-scholz.eu
Mobile2b is developed and maintained by:
Mobile2b GmbH
Im Mediapark 5
50670 Köln
Germany
+49 221 630 608 560
info@mobile2b.com
D-U-N-S® 34-232-7118
TODO
In the unlikely event that Mobile2b stops its operations (e.g. insolvency or liquidation) or stops actively maintaining the Mobile2b platform, we offer a Contingency Agreement (German) to our customers.
No matter what, you should feel confident that you can build your future digital business on top of Mobile2b.
Yes, according to our data protection policy, all stakeholders handling customer data have an NDA with Mobile2b.
Yes, we offer the option to integrate any SSO provider to log into Mobile2b.
Yes. Since SSO login is used, all users who are allowed to access the system are centrally managed in Azure.
Yes, RBAC is present. The permissions of a role can be defined only by admins. Since SSO is utilized, the role a user receives can be matched based on the Azure groups they belong to.
Example: if a user is in the group mobile2b_admin,
then that user will get the admin role in the environment.
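The group-to-role matching described above can be sketched as a small lookup. The group names and the role table here are illustrative assumptions, not Mobile2b's actual configuration:

```python
# Hypothetical sketch of matching Azure AD groups to application roles.
# GROUP_ROLE_MAP and the group names are illustrative assumptions.
GROUP_ROLE_MAP = {
    "mobile2b_admin": "admin",  # checked first: most privileged mapping wins
    "mobile2b_user": "user",
}

def resolve_role(azure_groups):
    """Return the first role whose Azure group the user belongs to."""
    for group, role in GROUP_ROLE_MAP.items():
        if group in azure_groups:
            return role
    return None  # no matching group: the user gets no role and no access
```

A user in the `mobile2b_admin` group would receive the admin role, while a user with no matching group would be denied access.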
TODO
TODO
Permissions are set by the admin role.
Yes, see our user management documentation for the password policy.
Azure SSO is used. Accounts are created automatically in the application if the Azure user belongs to a specific group. If the user is not part of that group, the login is denied.
Yes, this is a customer-managed user environment. The web application can be accessed from customer-managed devices, but it is not limited exclusively to them; users may also access the application from non-customer-managed devices.
Please check our IT Security Manifesto for details on our encryption policy.
Please check our IT Security Manifesto for details on our encryption policy.
Yes
Yes, data is logically separated from other customers' data. While data is stored in the same database as other customer data, it is logically segregated using account identifiers. Each customer's data is associated with a unique account identifier, ensuring that it remains distinct and inaccessible to other customers. The application layer is shared among all customers, but the data segregation at the database level ensures customer-specific privacy and security.
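The account-scoped separation described above can be sketched with a minimal data-access layer and an in-memory stand-in for the database. The class, field, and collection names are illustrative assumptions, not the actual implementation:

```python
# Illustrative sketch of logical tenant separation via an account identifier.
class ScopedRepository:
    """Data-access layer that forces every query to be scoped to one account."""

    def __init__(self, db, account_id):
        self.db = db                  # e.g. {"orders": [row, row, ...]}
        self.account_id = account_id  # the tenant's unique identifier

    def find(self, collection):
        # Only rows carrying this tenant's account_id are ever returned,
        # even though all tenants share the same underlying storage.
        return [row for row in self.db.get(collection, [])
                if row.get("account_id") == self.account_id]

# Two tenants sharing one database, logically separated:
db = {"orders": [
    {"account_id": "acme", "item": "widget"},
    {"account_id": "globex", "item": "gadget"},
]}
```

Because the scoping happens in the access layer rather than in each call site, application code cannot accidentally query another customer's rows.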
See the latest CIS benchmark results for Google Kubernetes Engine (GKE).
We utilize Google Container Analysis to detect malware in our container images before they are deployed (https://cloud.google.com/artifact-analysis/docs/artifact-analysis). We have network policies to control traffic between all our workloads, limiting the potential spread of malware within the cluster. We keep our Kubernetes clusters, nodes, and container images up to date with the latest security patches.
Please refer to https://cloud.google.com/kubernetes-engine/docs/concepts/firewall-rules?hl=de
We regularly scan our container images and Kubernetes nodes for known vulnerabilities using tools like Google Container Analysis. We utilize GKE's automatic upgrades feature to ensure our control plane and nodes are regularly updated with the latest patches. We gather feedback from stakeholders and review the patch management process regularly to identify areas for improvement. We conduct regular penetration tests (please see the related assessment).
We utilize JSON Web Tokens (JWTs). A user session is active for 15 minutes and is then automatically refreshed. After seven days, users are automatically logged out of the system and must log in again.
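The token lifecycle can be illustrated with a self-contained HS256 sketch using only the Python standard library. The secret, claim names, and the exact TTL handling are illustrative assumptions; the real implementation may differ:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # illustrative only; never hard-code real secrets

def _b64(data):
    """URL-safe base64 without padding, as used in compact JWTs."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(user, now=None, ttl=15 * 60):
    """Create a compact HS256-signed JWT with a 15-minute expiry."""
    now = int(time.time()) if now is None else now
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps({"sub": user, "exp": now + ttl}).encode())
    signing_input = (header + "." + payload).encode()
    sig = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return header + "." + payload + "." + sig

def verify_token(token, now=None):
    """Return the claims if signature and expiry check out, else None."""
    now = int(time.time()) if now is None else now
    header, payload, sig = token.split(".")
    expected = _b64(hmac.new(SECRET, (header + "." + payload).encode(),
                             hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or wrongly signed token
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims["exp"] <= now:
        return None  # expired: the client must refresh or log in again
    return claims
```

A client holding a valid token would get it refreshed (re-issued) before the 15-minute expiry; once the token lapses, verification fails and a new login is required.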
Yes
Firewall Protection: While we do not have a firewall on the ingress controller (all traffic is allowed), our infrastructure incorporates other security measures to safeguard against unauthorized access.
Yes, we implement access rights management within our Kubernetes environment. We leverage Google Cloud's RBAC, assigning limited rights to service accounts at the Kubernetes level. For example, our GitLab CI/CD pipeline utilizes a specific service account for image updates, while individual users have kubeconfig files with appropriate access levels based on their roles.
Yes. Our application's data flow adheres to the need-to-know principle, including the use of Mailgun for email sending. We transmit only essential data to Mailgun, ensuring security through encryption and secure protocols. Example:
Admins of the platform have the ability to configure the email details such as the subject, recipient, and content. This configuration is limited to what is necessary for the task of sending an email. When an admin decides to send an email, they are directly controlling the flow of their data out of our environment to the external email recipient. All actions related to data flow, including sending emails and deleting configurations, are logged for audit purposes. This ensures that there is a trail of user actions which can be reviewed for compliance.
Yes, it requires internet access for essential functionality, such as interacting with the Mailgun API. While there are no restrictions on internet access within our Kubernetes environment, the application itself does not access unnecessary external websites.
Yes, we are fully TISAX-certified.
Yes, we are fully TISAX-certified.
Yes, we are fully TISAX-certified.
Although Mobile2b conducts regular penetration tests, we are open to collaborating with you on additional penetration tests (contact us for contractual details).
Please refer to our SLA.
Yes, we are fully TISAX-certified.
We maintain audit logs for various activities such as exporting data, importing data, deleting, and updating objects, enabling us to track and review administrative actions.
While we currently do not utilize Privileged Access Management tools like CyberArk or a jump server, our audit logs capture administrative activities, including those performed by privileged users. These logs are regularly reviewed to ensure compliance and detect any unauthorized access or activities.
To prevent unauthorized data extraction, we rely on role-based access control (RBAC) with finely-tuned permissions. Access to sensitive data is restricted based on job roles and responsibilities, ensuring that employees only have access to the data necessary to perform their duties. Regular reviews and monitoring of access permissions further mitigate the risk of unauthorized data extraction.
Yes, we utilize Prometheus and Loki to gather metrics and logs from our Kubernetes cluster, including network activity, resource usage, and application data. Grafana visualizes these metrics and triggers alerts via email and Slack when specific thresholds are exceeded, enabling us to promptly detect and respond to potential security incidents.
Our hosting provider is Google Cloud Platform (GCP). Google Cloud is ISO 27001 certified. As for SOC 2 Type 2 audit reports, Google Cloud regularly undergoes audits to assess the effectiveness of its controls and processes.
See IT Security Manifesto: Encryption
No
See IT Security Manifesto: At-rest encryption
Customer admins are able to see (limited) audit logs.
See IT Security Manifesto: Backup and recovery
Yes
See IT Security Manifesto: Backup and recovery
Yes see Link
Annual supplier/provider reviews as part of TISAX.
See IT Security Manifesto: Data deletion
Please review the topic "Datenlöschung" (data deletion) in the Security Manifesto: https://docs.google.com/document/d/1PDrrE63jA9AeSNuO5Xrnqmij6Dq1hR3ow2uJnL0UNcY/edit#heading=h.e46gnbwr79jn
We utilize Google Container Analysis to scan for vulnerabilities in our Docker images at the Kubernetes level, enhancing our security measures and maintaining the integrity of our containerized applications. Additionally, we conduct regular penetration tests.
Deployments, including updates and patches, are automatically applied to customers' environments, eliminating the need for manual user intervention.
https://git.mobile2b.de/mybusiness-ai/infrastructure/-/raw/develop/kubernetes/mybusiness-ai-k8s.png
We have brute-force protection that prevents additional login attempts after 5 failed logins within a 15-minute period.
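The lockout described above can be sketched as a sliding-window counter. This is a minimal illustration assuming per-user tracking; the actual mechanism may differ:

```python
import time
from collections import defaultdict, deque

MAX_FAILURES = 5          # block after 5 failed logins ...
WINDOW_SECONDS = 15 * 60  # ... within a 15-minute sliding window

class BruteForceGuard:
    """Sliding-window brute-force lockout sketch (illustrative only)."""

    def __init__(self):
        self._failures = defaultdict(deque)  # user -> failure timestamps

    def record_failure(self, user, now=None):
        self._failures[user].append(time.time() if now is None else now)

    def is_blocked(self, user, now=None):
        now = time.time() if now is None else now
        q = self._failures[user]
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()  # forget failures older than the window
        return len(q) >= MAX_FAILURES
```

Once the oldest failures age out of the 15-minute window, the counter drops below the threshold and login attempts are accepted again.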
As part of our commitment to security and compliance, we are TISAX-certified. TISAX certification encompasses various aspects of information security, including operating system baseline compliance, hardening processes for client computers used by remote employees, and the security of the administrative consoles managing our Kubernetes environment.
Yes
For testing and development purposes, there are multiple separate clients, exclusively filled with manually generated dummy data (no productive data!), but otherwise providing all aspects for testing the functionality of the software.
Testers and developers do not have access to productive systems.
Only image files (jpeg, png), MS files (.xlsx, .pptx), and PDFs are recommended for storage, although theoretically all file formats can be stored.
No external storage media, as the system is server-based and the server is not physically accessible. In principle, all file formats can be stored. However, only images (jpeg, png), MS Office files (.xlsx, .pptx), and PDFs are practical.
All data except documents and photos (files) are stored in databases. Documents and photos are encrypted at-rest using 256 bit AES.
No further authenticity features. Data is only used internally (interface through intermediate file).
No further authenticity features. Data is only used internally (interface via intermediate file).
There are roles. These roles are assigned to individual users. The number and content of the roles can be flexibly adjusted by the admin.
The entire application is server-based, therefore also the rights & roles model.
Login via AzureAD, therefore only possible with licensed devices. However, pinging the homepage is possible with all devices.
We ensure data integrity by storing all data in databases (mariadb, mongodb, influxdb) and have backups in place in case of data loss or corruption.
We use JSON Web Tokens (JWT) signed with HS256 to transmit authenticated information. JWTs are always verified for authenticity.
Data transfers from/to clients are encrypted with TLS. Persistent disks used by the hosted Kubernetes service are encrypted at the hardware layer.
Files are stored on network drives and linked in the system.
Incoming and outgoing data is always encrypted. Apart from GCP in-transit encryption, there is no additional encryption inside the cluster.
Application logs are stored in Elasticsearch, which runs locally in the cluster. These logs do not include the mentioned events.
Login events are stored in DB.
K8s cluster control plane logs are handled by GKE service. This includes an audit log for the cluster inner workings which includes the mentioned events.
Log exchange with a customer log solution or SIEM is not supported.
K8s cluster audit logs can't be deleted or modified by an administrator.
Application logs are stored for 60 days. Alerts fire when disk space is low. The Docker daemon captures containers' stdout and stderr streams and stores them in log files on the Kubernetes host. If Elasticsearch is unavailable, the log shipper (Filebeat) stops reading new entries, but the log files themselves do not disappear immediately, so temporary Elasticsearch downtime does not cause a loss of log data.
Regular penetration tests of all subsystems are carried out by external security experts.
User authentication on google cloud web console is handled by username and password prompt as well as 2FA. Google can challenge users for additional information based on risk factors such as whether they have logged in from the same device or a similar location in the past. After authenticating the user, the identity service issues credentials such as cookies and OAuth tokens that can be used for subsequent calls. Google Identity and access management service authorizes access to specific resources based on roles and permission policies.
Do you develop the application according to the currently valid security standards described in the internal development guidelines?
Best practices measures are considered in software security development (e.g. consideration of OWASP Top 10, input validation, protection against XSS, CSRF, etc.).
Unauthorized personnel do not have write access to executables. Docker images are automatically scanned for issues and vulnerabilities upon push to the registry.
All accesses to the application and outgoing data transmissions are disclosed and documented through third-party providers in our architecture diagram.
In the upcoming release, there will be a limit of 3 login attempts within 15 minutes from the same IP address.
Permissions are granted through groups. Within the groups, the assigned users are listed in a way that allows for efficient verification.
No test or default users are used in production environments; instead, there is a concept of a support user that is active only for 15 minutes after activation by the customer.
There is no N-tier architecture. All services and databases run on the same cluster in the same namespace. Internally, the services communicate with each other over HTTP and with the databases (MySQL, MongoDB, and InfluxDB) using their respective database protocols.
Only roles are assigned. New roles can be created and assigned as needed.
Revocation information is published through OCSP.
The URL only contains the customer name.
We are fully TISAX-certified.
• Failed authentication attempts (if authentication is present)
• Anomalies detected by the application
• Application exceptions (crashes, unhandled exceptions, ...)
• Access through maintenance interfaces
Failed login attempts are logged. Exceptions and crashes are logged. There is a brute-force protection that prevents additional login attempts after 5 failed logins in a 15 minute time period.
Every data export as well as deletion process is logged in the environment. The individual events can be viewed in a list under "Account" > "Activity" by administrators. This list can be filtered by any date.
Application logs are stored in local Elasticsearch. K8s cluster control plane logs are handled and stored by GKE service.
There are no emergency users, only support access if opted-in by the client's administrator.
RSA 2048-bit encryption is used.
TLS certificates are issued by cert-manager, a cloud-native client implementing the Let's Encrypt ACME protocol, which watches for specific annotations on Ingress resources and issues certificates accordingly. RSA 2048-bit keys are used.
Each user is created as a user in the system. User authentication is done via Azure ID, with a session active for 15 minutes and then automatically refreshed. After seven days, users are automatically logged out of the system and must log in again.
The integration of additional interfaces is possible. For this, administrators can create, manage, and delete API keys under the "System Integration" menu.
User accounts are created in the system for each user. Users log in using Azure ID, where a session is active for 15 minutes and is automatically refreshed afterwards. Users are automatically logged out of the system after seven days and must log in again. Certificates are valid for 3 months and are refreshed when 30 days of validity remain.
Additional interfaces can be connected by creating, managing, and deleting API keys under the menu item "System Integration" by the administrator.
Admins can reset passwords; sessions are short-lived (15 minutes). With 2FA, the second factor must be re-entered (currently not enforced). There are no admins via local accounts, only AD integration / ADFS.
Credentials are protected from interception during transmission by ensuring that all incoming network traffic is encrypted, with TLS connections terminated on the ingress controller inside the cluster.
API keys are always hard-deleted.
Revocation information for certificates is published through OCSP, and they can be revoked using the command certbot revoke --cert-path /path/to/certificate --key-path /path/to/key.
Before permanently deleting credentials, it is verified that they are no longer needed to access stored or archived data.
We have no control over the operating system used by the managed cloud service. For details on components such as databases, logging, monitoring, and ingress, please refer to the provided architecture model.
There is no dedicated namespace for individual customers; data separation is purely software-based and logical, using a customer signature (customer identifier) and data-access-layer controls in the application source code.
Connections to databases are established over each database's native protocol and are not encrypted.
The application provides backup and restore functionality with defined Recovery Point Objective (RPO) and Recovery Time Objective (RTO).
The backup process for our Information System is limited in access, password-restricted, and tracked, with a tested restoration process in place.
A complete data export is possible via the REST API.
Backups are stored only in S3. However, S3 itself is redundant within an AWS region.
We don't use cloud services for encryption.
Do you adhere to security standards such as CIS Benchmarks, OWASP, BSI recommendations, NIST, and manufacturer recommendations? What security software do you use, such as IDS, IPS, AV, SIEM, TripWire?
We have implemented measures to strengthen the security of our Kubernetes cluster and applications following best practices outlined in CIS benchmarks.
User data creation is fully in control of the client.
We are fully TISAX-certified.
The applications are authorized to retrieve endpoints from the Kubernetes API, nothing else. Their RBAC roles are as follows:
Rules:
See IAM roles: https://docs.google.com/spreadsheets/d/1-Sdiy26Zm7OFP_GPR3iirsgMTH_Pu4Dh6dxdpQlfIJA/edit?usp=sharing
SSO is fully supported: AzureAD, ADFS using SAML, and any OAuth 2.0 provider.
The project manager formalizes the authorization workflows for access to classified data through a full role-and-permission system implemented in the software.
It is possible.
Authentication events are logged internally.
Sessions are automatically disconnected after the AuthToken expires in 15 minutes and users are automatically redirected to the Login Page.
The use of multiple clients in parallel is currently allowed.
We monitor cluster node host metrics, pod/container metrics, and application availability using external services.
There are confidentiality agreements between Mobile2b and other subcontractors.
We use bcrypt for password hashing. We use a 256-bit AES encryption for documents and photos.
No encryption. GKE, S3, and no data migration planned.
New users receive a one-time login link via email and are forced to change their password on the first login, or single sign-on (SSO) via Active Directory can be used. Password reset is done by clicking a "Forgotten password" link and following the provided instructions.
We rotate AWS keys and secrets once a year.
Static code analysis with SonarLint, TSLint, SonarQube, internal reviews, and pentests. Container images are scanned for vulnerabilities and issues automatically upon pushing to the registry.
See IT Security Manifesto. We guarantee detailed logging and the technical and temporal availability of log files for the detection and investigation of security events/incidents.
Please refer to: https://docs.google.com/document/d/16j2bZR8bBYPwnSslx2CuW8aRjCW5S39Lfb-RKjE4xgg/edit#
We are fully TISAX-certified.
Physical access control is managed by Google Cloud employees as the service is cloud-based, and no other individuals have physical access.
N/A: Mobile2b is not hosting the data itself; hosting partners are fully certified.
Data hosting partners are fully certified for ISO 27001.
Hosting partners are fully certified.
We have implemented an information system security policy and will inform customers of any changes in this policy.
The process is described in a shared document with an assigned number.
Regular developer training, internal best practices, and regular penetration testing by external parties according to OWASP are implemented to ensure state-of-the-art security measures are in place.
We are fully TISAX-certified.
Although we conduct regular penetration tests, we are open to collaborating on additional penetration tests.