As you all probably know, EASI already offers a solution to back up your virtual machines into our cloud. But what about your user endpoints? What if one of your employees' laptops gets stolen? What if a cryptovirus infects the endpoint?
Thanks to Safe2be, you don't need to worry about those questions: we have a solution to secure your data and even track your stolen laptop down!
Let me explain, from a technical point of view, how this solution works and what the benefits are.
The solution captures data efficiently, leveraging source-side deduplication, opportunistic scheduling, and bandwidth throttling for transparent endpoint data protection.
Traffic between client and server can be throttled for all operations, and you can optionally define a schedule for when the limit applies. Throttling ensures that backups never use more than the allowed bandwidth, so other traffic is not impacted.
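To make the idea concrete, here is a minimal token-bucket sketch of bandwidth throttling. It is not the product's implementation; the rate parameter and class name are assumptions for illustration only.

```python
import time

class BandwidthThrottle:
    """Minimal token-bucket limiter capping backup upload speed.

    `max_bytes_per_sec` is a hypothetical parameter; in the real product
    the limit is a configurable, optionally schedule-based setting.
    """

    def __init__(self, max_bytes_per_sec):
        self.rate = max_bytes_per_sec
        self.allowance = max_bytes_per_sec
        self.last_check = time.monotonic()

    def throttle(self, nbytes):
        """Block until sending `nbytes` stays within the allowed rate."""
        now = time.monotonic()
        # Refill the allowance for the time elapsed, capped at one second's worth.
        self.allowance = min(self.rate,
                             self.allowance + (now - self.last_check) * self.rate)
        self.last_check = now
        if nbytes > self.allowance:
            time.sleep((nbytes - self.allowance) / self.rate)
            self.allowance = 0
        else:
            self.allowance -= nbytes
```

Calling `throttle(len(chunk))` before each network write keeps the average upload rate at or below the configured cap, leaving the rest of the link free for the user's normal traffic.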
The frequency of protection is a range, typically 8 to 24 hours (i.e., daily), that sets the minimum and maximum thresholds within which a client is protected by the Opportunistic Scheduling feature: the scheduler triggers a backup at a convenient moment inside that time frame.
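The decision logic behind such a min/max window can be sketched as follows. The threshold values and the function name are assumptions, chosen to mirror the "8 to 24 hours" range mentioned above.

```python
from datetime import datetime, timedelta

# Hypothetical thresholds: protect at least once per 24 h,
# but not more often than every 8 h.
MIN_INTERVAL = timedelta(hours=8)
MAX_INTERVAL = timedelta(hours=24)

def backup_decision(last_backup, now, client_online):
    """Sketch of opportunistic scheduling for one client."""
    age = now - last_backup
    if age < MIN_INTERVAL:
        return "skip"    # backed up recently enough
    if age >= MAX_INTERVAL:
        return "force"   # overdue: back up as soon as the client appears
    # Inside the window: run opportunistically when the client is reachable.
    return "run" if client_online else "wait"
```

The point of the window is flexibility: a laptop that is offline at a fixed backup time would miss a classic scheduled job, whereas the opportunistic scheduler simply waits for the next moment the client is reachable within the allowed range.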
Data deduplication is a method of reducing storage needs and speeding up the backup process by eliminating redundant data. Only one unique instance of the data is actually retained on the storage media; redundant data is replaced with a pointer to that unique copy.
For example, a user has a typical file that requires 10 MB of storage space on their desktop and wants to back up this file. Nine other users have the same file that needs to be backed up. With data deduplication, only one instance of the file is actually stored on the backup media; each subsequent instance is just referenced back to the one saved copy. In this example, a 100 MB storage demand is reduced to only 10 MB. Not only is the storage demand reduced, but the backup also completes significantly faster.
Once the file has been backed up by one client, the other clients only need to verify the same data blocks and don't have to upload the file again.
The following is the general workflow for deduplication:
A block of data is read from the source and a unique signature for the block of data is generated by using a hash algorithm.
Data blocks can be compressed (default), encrypted (optional), or both. Data block compression, signature generation, and encryption are performed in that order on the source or destination host.
The new signature is compared against a database of existing signatures for previously backed up data blocks on the destination storage. The database that contains the signatures is called the Deduplication Database (DDB).
If the signature exists, the DDB records that an existing data block is used again on the destination storage; the index information is written to the DDB, and the duplicate data block is discarded. If the signature does not exist, the new signature is added to the DDB, and both the index information and the data block are written to the destination storage.
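The workflow above can be sketched in a few lines. This is a simplified illustration, not the product's code: the block size is deliberately tiny, the DDB is a plain dictionary, and compression/encryption are omitted.

```python
import hashlib

BLOCK_SIZE = 4  # tiny block size for the demo; real systems use far larger blocks

def backup(data, ddb):
    """Source-side dedup sketch: only blocks unknown to the DDB are 'sent'.

    Returns the per-backup index (ordered signatures) and the number of
    bytes that actually had to travel to the destination storage.
    """
    index, sent = [], 0
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        sig = hashlib.sha256(block).hexdigest()  # unique signature per block
        if sig not in ddb:
            ddb[sig] = block                     # new block: store it once
            sent += len(block)
        index.append(sig)                        # always record the reference
    return index, sent

def restore(index, ddb):
    """Rebuild the original data from the index and the DDB."""
    return b"".join(ddb[sig] for sig in index)
```

A first backup uploads every unique block; a second client backing up identical data uploads nothing, because every signature already exists in the DDB — exactly the behavior described above.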
Because of these deduplication capabilities, existing data is not transferred between client and server. A backup completes within minutes, or even seconds when little data has changed, and backups are therefore incremental by nature.
A user is not bothered with maintenance or technical tasks regarding backups. The software is fully automated and takes backups on time in the background. The user interface is simple to use and has a minimal footprint on the desktop or laptop; the majority of tasks are done via a web interface hosted on the server side.
Multiple recovery options are available, including administrator-driven recovery and user self-service file-level recovery via a mobile app (iOS or Android), Windows Explorer integration, or a web console.
An administrator can restore specific user data if necessary. At the same time, users can recover data themselves via a self-service portal, which reduces the administrator's workload.
Secure data transfer without a VPN connection is achieved using encrypted backup streams over SSL. The server generates an SSL certificate when a new client joins the environment, providing an extra level of security that prevents spoofing or rogue access to data. Data is encrypted at the source and decrypted before being written to storage on the server side.
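As an illustration, a client could build its TLS context along these lines using Python's standard `ssl` module. This is a generic sketch, not the product's code; the function name and the `ca_cert_path` parameter are assumptions.

```python
import ssl

def backup_stream_context(ca_cert_path=None):
    """Sketch: TLS context a backup client could use for its stream.

    `ca_cert_path` would point at the CA behind the certificate the
    server issued when the client joined (hypothetical file location).
    """
    # The default context verifies the server certificate and hostname,
    # which is what blocks spoofed or rogue servers.
    ctx = ssl.create_default_context(cafile=ca_cert_path)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse outdated protocols
    return ctx
```

Wrapping the backup socket with `ctx.wrap_socket(sock, server_hostname=host)` then gives an encrypted stream end to end, with no VPN tunnel required.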
Users get virtually anywhere, anytime access to view, download, and edit protected data for increased productivity. Multiple restore points are kept on the server, so a user can go back in time to a specific version of a file; accidental removal or unwanted changes are rolled back in no time.
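The restore-point idea reduces to keeping every backed-up version of a file and letting the user pick one. A minimal sketch (names and in-memory storage are assumptions for illustration):

```python
# Sketch of server-side restore points: every backup of a file adds a
# new version, and a user can roll back to any earlier one.
versions = {}  # path -> list of file contents, oldest first

def store_version(path, content):
    """Record a new restore point for the file at `path`."""
    versions.setdefault(path, []).append(content)

def restore_point(path, point=-1):
    """Return a given restore point; the latest version by default."""
    return versions[path][point]
```

In practice the retention policy decides how many restore points are kept and for how long, but the user-facing operation is exactly this: pick a point in time, get that version back.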
Last but not least, the multi-client support for Windows, Mac, and Linux enables backup & recovery to work seamlessly across any platform.