SecureBlackbox 16: Securing Your Client-Server or Multi-Tier Application

Note: This article applies only to SecureBlackbox Legacy. For future development please consider using the latest version.

This article outlines how encryption solves problems in information security and trust for distributed applications.

Contents

The data security problem in multi-tier, client-server, and network applications

The main security threats are listed below.

Unauthorized data access

Unauthorized access to data can lead to confidential information being publicly disclosed or used against its owner. Companies and private users rely on open communication channels for data transfer, so data sent over such channels badly needs protection to preserve confidentiality. Some possible causes of unauthorized access to secret data are below:

  1. Network traffic transferred in clear (unencrypted) form
  2. Absence of authorization mechanisms for access to secret data
  3. Absence of access-isolation mechanisms

Unauthorized data modification

This security threat arises when data can be changed or deleted, accidentally or intentionally, by a person who does not have permission to access it. Unauthorized modification damages data integrity and can affect information that is not directly linked to the modified data. Such modifications are especially dangerous because they can escape attention for a long time.

Below are some of the possible causes of unauthorized modification:

  1. Absence of data integrity verification in software
  2. Password sharing or leakage
  3. Easily-guessed passwords
  4. Passwords stored in easily accessible places
  5. Absent or weak identification and authentication schemes

Data encoding and encryption

One of the most important steps toward data protection is encryption. Encryption transforms data into a sequence of bytes using an encryption algorithm, so that the data cannot be read without the corresponding key. Very often, "protection" is attempted by keeping the transformation algorithm itself secret: the author assumes that if the algorithm is unknown, the data is safe. This is not encryption but encoding. Revealing the algorithm defeats such "encryption" immediately, and the algorithm can usually be recovered from the software that processes the encoded data. Sometimes the data can even be recovered without knowing the details of the algorithm at all.

True encryption relies on published encryption algorithms. These algorithms are well known and have been carefully analyzed by cryptographers and mathematicians; their strength has been tested and proven again and again. The only secret part is the key used to encrypt and/or decrypt the data.

The level of protection is determined not only by the algorithm itself, but also by the way the algorithm is applied. Internet security protocols, for example, take special care over how keys are created and used.


Symmetric encryption algorithms

In symmetric encryption, the same algorithm and key are used for both encryption and decryption; this approach is also called secret-key cryptography. Critically, the data is only as secure as the place where the key is stored: anyone who obtains the key can decrypt and read the data.

Fig. 1. A symmetric key is used for both encryption and decryption.


One of the advantages of this method is that you need to keep safe only the key, not the whole data set; the key size does not depend on the size of the encrypted data. But such encryption becomes useless when you need to pass data over open communication channels. If you transfer the secret key over the same channel, there is no sense in encrypting (everyone who can intercept the information can intercept the key as well). And if you have a channel secure enough to pass the key, you can use it to transfer the data itself without encryption. Special key-exchange algorithms, discussed later, are used to solve this problem.


Key creation

As almost any sequence of bytes can be used as a key (assuming its length meets the algorithm's requirements), random-number generators are used to create keys. The main task during key generation is to produce a unique key, since security depends on its unpredictability: the better the generator, the less likely it is that someone can guess which numbers will be generated next. To check how good a generator is, and whether the sequence it generates is really random, cryptographers apply statistical tests for randomness.

Random-number generators

A truly random number can be generated only by using special devices. Such generators gather unpredictable data from the environment (parameters of radioactive decay, surrounding atmospheric conditions, minor fluctuations in electric current, and so on), so that it is practically impossible to reproduce the conditions under which a given random number was generated.

Such generators are good enough; an alternative is to get the random data from computer input devices such as a mouse (by asking the user to move the mouse for some time).
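In software, this environmental entropy is typically exposed through the operating system. A minimal sketch in Python, assuming a 128-bit (16-byte) key is required:

```python
import secrets

# Draw 16 bytes (128 bits) from the operating system's entropy pool,
# which mixes hardware and environmental noise sources.
key = secrets.token_bytes(16)
print(len(key))  # 16

# Two independently drawn keys are, for all practical purposes, never equal.
print(key == secrets.token_bytes(16))  # False
```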

Pseudo-random-number generators

A pseudorandom number is generated in two steps:
  1. The program gets some parameters that change with time, for example, the system time, cursor position, etc.
  2. The program calculates a digest (hash) of those parameters. The digest algorithm produces a new sequence of bytes from the given data: the same input always yields the same digest, but changing even one bit of the input changes the digest completely.

    Why perform the second step when we already obtained random data during the first? Parameters such as the time or the cursor position can easily be enumerated and tested one by one. So most of this data, without further processing, cannot be called truly random.

    Not every hash calculation algorithm is usable for cryptography purposes — only specially designed digest (hash) algorithms. Several hash algorithms are popular today:

    MD2: Ron Rivest created the original message digest (MD) algorithm; MD2 is an improved successor. It returns a 128-bit digest, so there are 2^128 possible values. Unfortunately, weaknesses were later found in this algorithm and it is no longer recommended for use.

    MD5: After several less successful designs such as MD3 and MD4, Ron Rivest offered the popular MD5. This algorithm is faster than MD2 and also creates a 128-bit digest. (MD5, too, has since been broken and should not be used for new designs.)

    SHA-1: This algorithm is similar to MD5 but has a better internal structure and returns a longer digest of 160 bits. It was not only approved by cryptanalysts but also preferred over MD5 by the cryptography community. However, it was later discovered that SHA-1 can be attacked, so a stronger algorithm (such as SHA-2) should be used where possible.

    SHA-2: This family supports digest lengths of 224, 256, 384, and 512 bits and is the preferred choice at the moment.
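The digest properties described above are easy to observe with Python's hashlib module; the message bytes here are just an illustrative example:

```python
import hashlib

message = b"transfer 100 EUR to account 12345"

# Digest sizes match the algorithms described above.
print(len(hashlib.md5(message).digest()) * 8)      # 128
print(len(hashlib.sha1(message).digest()) * 8)     # 160
print(len(hashlib.sha256(message).digest()) * 8)   # 256

# Avalanche effect: changing a single character of the input yields
# a completely different digest.
altered = b"transfer 100 EUR to account 12344"
print(hashlib.sha256(message).hexdigest() ==
      hashlib.sha256(altered).hexdigest())          # False
```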


Block and stream encryption in symmetric algorithms

    The key discussed above is used in two types of symmetric encryption algorithms: block algorithms and stream algorithms.

Block encryption

    This type of algorithm splits data into blocks and encrypts each block separately with the same key. If the data size is not a multiple of the required block size, the last block is enlarged to the necessary size and filled with padding. In the simplest mode, encrypting the same data with the same key always produces identical results. Such algorithms are typically used for files, databases, and email messages. There are also variations (chaining modes) in which the encryption of each block depends on the output of the previous blocks.
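Filling the last block is usually done with a padding scheme. A minimal sketch of PKCS#7-style padding, one common convention (a real implementation would also validate the padding bytes on removal):

```python
def pad(data: bytes, block_size: int = 16) -> bytes:
    """Fill the last block up to block_size, PKCS#7 style: each padding
    byte holds the number of padding bytes added."""
    n = block_size - len(data) % block_size
    return data + bytes([n]) * n

def unpad(data: bytes) -> bytes:
    # The last byte tells how many padding bytes to strip.
    return data[:-data[-1]]

padded = pad(b"13 bytes long")   # 13 is not a multiple of 16, so 3 bytes are added
print(len(padded))               # 16
print(unpad(padded))             # b'13 bytes long'
```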

Stream encryption

    Unlike block encryption, these algorithms encrypt each byte separately. For the encryption, pseudorandom numbers are generated based on the key, and the result of encrypting a byte usually depends on the result of encrypting the previous one. This method offers high throughput and is used for encrypting information transferred over a communication channel.
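The idea can be sketched as follows. This toy cipher derives a keystream from the key by hash chaining and XORs it with the data; it is for illustration only and is NOT a secure cipher, since real systems use vetted algorithms:

```python
import hashlib

def keystream(key: bytes):
    """Derive an endless pseudorandom byte stream from the key by hash
    chaining. Illustrative only; NOT a secure cipher."""
    state = key
    while True:
        state = hashlib.sha256(state).digest()
        yield from state

def crypt(key: bytes, data: bytes) -> bytes:
    # XOR each byte with the keystream; applying it twice restores the data.
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

ct = crypt(b"secret key", b"hello over the wire")
print(crypt(b"secret key", ct))  # b'hello over the wire'
```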


Attacks on encrypted information

    There are two ways to restore encrypted information. You can either try to find the key or exploit the algorithms' vulnerabilities.

    Enumerating the key: No matter what algorithm is used, it is always possible to decrypt the data by trying all possible keys one by one. This is called a "brute-force attack." The only obstacle is the time the exhaustive search takes, so the longer the key, the better protected the data. For example, an exhaustive search of a 128-bit key space would take trillions of millennia. Of course, as computing power increases the search time shrinks, but for the time being a 128-bit key remains secure enough.
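A back-of-the-envelope check of that claim, assuming a hypothetical (and generous) rate of 10^12 keys tested per second:

```python
# Rough estimate of a brute-force search over a 128-bit key space.
keys = 2 ** 128
rate = 10 ** 12                      # keys per second (an assumed rate)
seconds_per_year = 60 * 60 * 24 * 365
years = keys // (rate * seconds_per_year)
print(years > 10 ** 18)  # True: over a quintillion years even at this rate
```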

    Exploiting algorithm vulnerabilities: Unlike the previous method, this one is based on discovering and exploiting weaknesses in the algorithm itself. In other words, if the attacker can find some regularity in the encrypted text, or can bypass the protection in some other way, he can reduce the time required to find the key or decrypt the data. Since most encryption algorithms are published, cryptanalysts all over the world try to find such vulnerabilities; as long as none have been found in the popular algorithms, those algorithms can be accepted as secure.


    Below are some of the many encryption algorithms you can choose from. When choosing a symmetric algorithm, speed and key length are usually the deciding factors.

    • DES (Data Encryption Standard): A block algorithm that uses a 56-bit key, designed in the late seventies by researchers from IBM and the NSA (National Security Agency). The algorithm was investigated thoroughly, and at the time experts concluded it had no weak points. But that was in the seventies and eighties of the last century; in the nineties, increased processing speeds made a complete enumeration of the key space feasible. In 1999, the Electronic Frontier Foundation decrypted DES-encrypted information in less than 24 hours.

    • AES (Advanced Encryption Standard): The result of a NIST (National Institute of Standards and Technology) contest for a new algorithm. One of the main conditions was that developers had to renounce their intellectual property rights, which made it possible to create a standard usable universally and without royalties. All candidate algorithms were investigated thoroughly by the worldwide community, and NIST announced the winner on October 2, 2000: an algorithm by two Belgian researchers, Vincent Rijmen and Joan Daemen. Since then it has become the world's cryptography standard, supported by most applications.

    • Other algorithms developed by various cryptography companies include Blowfish (Counterpane Systems), SAFER (Cylink), RC2 and RC5 (RSA Data Security), IDEA (Ascom), and CAST (Entrust).

Asymmetric (public key encryption) algorithms

    Secret-key algorithms can encrypt data, but they are hard to use when you need to pass encrypted data to someone else, because you need to pass the key too. Transferring the key over public channels is as bad as transferring the clear data itself. The solution to this problem is asymmetric cryptography (public-key encryption), developed in the 1970s.

    While symmetric cryptography is based on one key being used for both encryption and decryption, in asymmetric cryptography one key is used for encryption and another for decryption. These keys form a pair; keys from different pairs never match.


    An asymmetric key consists of two parts — one for encryption and another for decryption.

    Fig. 2. Parts of an asymmetric key

    One key is called private and only its owner must have access to this key. It must be kept secret. The second key is called public and is not a secret. Everyone can use your public key. Suppose you want to encrypt some data for another person. All you have to do is to encrypt this data with his or her public key. Now, no one but this person will be able to read this data. Even you cannot decrypt it back (for example, if you have deleted the original information). So, if you want to get important information you have to generate two keys. You store the private key in a secure place and you distribute the public key in any way; for example, you can place your public key on your website. Now anyone can send you secret data encrypted with the public key you provided. You just have to use your private key in order to decrypt the data.

    Encryption with a public key has one disadvantage: asymmetric algorithms work much slower than symmetric ones. So, when large amounts of secret data are transferred, the data is encrypted with a symmetric algorithm (using a symmetric key), and then that key is encrypted with an asymmetric algorithm using the recipient's public key. Thus encryption is quick, since a symmetric algorithm is used, and there is no need to transfer a secret key as clear text. Usually each symmetric key is used only once; when the next document is encrypted, a new secret key is generated. Because the symmetric key is used in only one encryption session, it is often called a session key. The use of the session key is transparent to the user: the user only provides the public key for encryption and the software performs the rest of the steps.


    After the data encryption, the symmetric key is encrypted with the public key and merged with the encrypted data.

    Fig. 3. Encryption with a session key

    Asymmetric encryption systems are based on one-way mathematical functions: knowing the result, you cannot recover the input data. For instance, given only the sum of two numbers, you cannot tell which numbers were added.

Public key algorithm security

    As discussed above, there are two possible ways to restore encrypted data: to find the key or to exploit vulnerabilities in the algorithm.

    Enumerating the key: If the message is encrypted as described above, we have two parts: the message itself, encrypted with the (symmetric) session key, and the session key, encrypted with the public key. We have already discussed attacks on symmetric algorithms and keys; discovering an asymmetric private key is an even more complicated task, because asymmetric keys are much longer than symmetric ones.

    Attackers can instead try to exploit the fact that only one private key corresponds to a known public key. But such an attack takes even more time, because it involves factoring a very large number. Currently there are no efficient algorithms that can perform such calculations in a reasonable time, so until such an algorithm is developed, public-key cryptography can be considered secure.

    Exploiting algorithm vulnerabilities: This method of attack is probably the most efficient against public keys. The fact is that today there are no public-key algorithms without weak points: for all asymmetric algorithms there are methods that recover the key faster than direct enumeration. But this is not critical, since it has been shown that even exploiting the weak points, an attack would take far too much time, and the probability of being lucky enough to stumble on the correct value early tends to zero. So asymmetric encryption can be treated as secure enough for all modern practical purposes. The most important point is that the longer the key, the better your data is protected.


    • DH (Diffie-Hellman): Stanford graduate student Whitfield Diffie and Professor Martin Hellman researched cryptographic methods and the key-exchange problem. As a result, they proposed a scheme that allows the creation of a common secret key based on an open exchange of information. This scheme does not encrypt anything; it only makes it possible for two (or more) sides to generate a secret key that depends on every member's contribution but is not revealed to any third party.

      This algorithm is not used for encryption; its aim is to generate a secret session key. Each interacting side has a secret number, and there are also several public parts known to all members that can be transferred over open channels. To get a secret session key, these public parts must be combined with the secret ones.


      Fig. 4. Diffie-Hellman algorithm. One secret value is created using different keys.
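The scheme can be sketched with modular exponentiation; the numbers here are artificially small so the arithmetic is readable, whereas real deployments use primes of 2048 bits or more:

```python
# Toy Diffie-Hellman exchange with an artificially small prime.
p, g = 23, 5                 # public parameters; may travel over open channels
a, b = 6, 15                 # each side's private secret; never transmitted

A = pow(g, a, p)             # Alice sends A = g^a mod p
B = pow(g, b, p)             # Bob sends   B = g^b mod p

# Each side combines the other's public value with its own secret.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
print(shared_alice == shared_bob)  # True: both arrive at the same secret
```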


    • RSA: After Diffie and Hellman published their article in 1976, Ron Rivest took an interest in the idea. Together with his MIT colleagues Adi Shamir and Leonard Adleman, he devised a new algorithm, published in 1978 and named after the authors' initials. It is often used with a 1024-bit or 2048-bit key and has become quite widespread.

    • ECDH (Elliptic Curve Diffie-Hellman): Working independently in 1985, Neal Koblitz and Victor Miller concluded that a little-known field of mathematics, elliptic curves, could be useful in public-key cryptography. Algorithms based on elliptic curves began to spread in the nineties, and today some countries list them as information-security standards.


Certificates

    After the applications exchange keys, they can encrypt the data being sent, ensuring its confidentiality. But can you be sure that the application sends the data exactly where it should? An attacker could substitute his own server for the real one and simply send his key during the key exchange. And how can you be sure that a message you received really comes from the person you think sent it? Digital certificates solve such authentication problems, for example proving a message's authenticity.

    Digital signatures are used to confirm message authorship. As discussed above, to encode a message so that only one person can read it, you encrypt the message with that person's public key; such a message can be decrypted only with the recipient's private key. But what happens if you encrypt a message with your private key? It can be read by anyone who has your public key, so it will not be secret at all. At the same time, nobody else can produce data that decrypts correctly with your public key, so only you can have performed the encryption, and anyone who reads the message can be sure it was sent by you. Since public-key algorithms are slow, it is not feasible to encrypt the whole message this way; instead, only the message digest is encrypted with your private key.

    This procedure consists of two steps:

    1. You calculate the message digest and encrypt it with your private key. When sending a message, you attach the encrypted digest to it.
    2. The recipient calculates the message digest using the same algorithm as you did, decrypts the attached digest with your public key, and compares the two. If the digests are equal, he or she can be sure that the message was sent by you and was not altered during transfer.
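The compute-attach-verify flow can be sketched as follows. Python's standard library has no asymmetric signing, so an HMAC (a shared-key construction) stands in for the private-key encryption of the digest; with a real digital signature, the shared key would be replaced by the signer's key pair:

```python
import hmac
import hashlib

key = b"stand-in for the signing key"
message = b"the contract text"

# Step 1 (sender): compute a digest bound to the key and attach it.
tag = hmac.new(key, message, hashlib.sha256).digest()

# Step 2 (recipient): recompute the digest the same way and compare.
expected = hmac.new(key, message, hashlib.sha256).digest()
print(hmac.compare_digest(tag, expected))   # True: message is authentic

# A tampered message no longer matches the attached digest.
forged = hmac.new(key, b"the contract text!", hashlib.sha256).digest()
print(hmac.compare_digest(tag, forged))     # False
```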

    How can we be sure that the public key we have really belongs to a certain person? Somebody could break into the server that stores public keys and replace your partner's key with his or her own.

    Digital certificates are used for authentication purposes.

    In brief, a certificate can be represented as a number of records containing information about its owner and certain cryptographic information. The owner information is usually human readable, for example, the name or passport data. The cryptographic information consists of the public key and the digital signature of a certificate authority (CA). This signature confirms that the certificate belongs to the person whose name is specified in the certificate.

    Another question arises: how can we know that the signature belongs to the CA? Presumably the CA must have its own certificate confirming its public key. Self-signed certificates are used for this purpose: a self-signed certificate is signed with its owner's own digital signature. You can create a self-signed certificate yourself, but this does not mean other people will trust it. Correspondingly, you also should not trust most self-signed certificates, except the root CAs' self-signed certificates.

    Another typical use for self-signed certificates is within the enterprise: if you create a self-signed certificate, you can use it to sign other certificates, for instance generating certificates for all company employees (and for them only). This practice lets you obtain as many certificates as you need without spending much, and it also raises the level of security inside your company. Certificates can be used not only by people but also by applications, which is especially useful when information is transferred over open channels between applications.

    If you develop a complex software application and want to protect transferred data, most likely you will have to create a certificate infrastructure. Using certificates, client applications can check that they have connected to the intended server, and server applications can check whether the client has the right to connect.

    If you think that certificate support is a complicated task, don't worry: several reusable security libraries help you deal with certificate management, SecureBlackbox among them. The main task when integrating certificate support into your application is to do everything with security in mind and avoid mistakes that would create security flaws. Best of all, of course, is to involve security specialists in the process.

    The most commonly used standard for certificates today is X.509. It describes the certificate format and distribution principles. There exist other certificate formats used in different communication protocols.


Secure transport protocols

    As the internet has grown, secure data transfer has become a necessity. One of the first engineering solutions was SSL (Secure Sockets Layer), developed by Netscape in 1994. Its use is now widespread: it is integrated into most browsers, web servers, and other software and hardware systems dealing with the internet. Several revisions of the protocol exist today: SSLv2, SSLv3, and TLSv1, of which TLSv1 is the most popular. SSLv2 is no longer used due to the discovery of several vulnerabilities.

    Secure Sockets Layer (SSL) is a protocol for authentication and encryption at the session level; it provides a secured communication channel between two sides (client and server). SSL provides confidentiality by generating a common secret for the client and server. It supports server authentication and optional client authentication to resist outside interference, message substitution, and eavesdropping on client-server applications. SSL sits at the transport level (below the application level), so most application-level protocols (such as HTTP, FTP, TELNET, and so on) can run transparently over it.

    The following simplified look at the client-server communication scheme gives a better understanding of how SSL works.

    1. The client composes a client hello message before establishing the connection. This message contains information about supported protocol versions, encryption methods, a random number, and the session identifier. After that the message is sent to the server.

    2. The server can answer either with another hello message or with an error message. The server hello message is like the client one, but the server selects the encryption method that will be used based on information it received from the client.

    3. The server can send its certificate or certificate chain (several certificates where all but one of the certificates are signed by other certificates) for authentication after its hello message has been sent. Authentication is required for the key exchange except when using the Anonymous Diffie-Hellman algorithm. The key exchange can be realized with the help of certificates corresponding to the encryption algorithms specified during the establishment of the connection. Usually, X.509.3 certificates are used.

    4. The client obtains the server's public key, which can be used for session key encryption at this stage.

    5. After the certificate is sent, the server can optionally create a certificate request message to request the client certificate if necessary.

    6. After the last hello message, the server sends the handshake-completion message. When the client receives this message, it must check the server certificates and send a finalizing message specifying that the handshake is complete. Now the sides can start the encrypted data exchange.

    7. Both server and client can send a finalization (goodbye) message before the end of the communication session. After such a message is received, a similar message must be sent in response, and the connection is closed. Finalization messages protect against truncation attacks. If this message was sent before connection shutdown, the client can resume the session later; resuming a session takes less time than establishing a new one.
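In Python, this whole handshake is driven by the ssl module; a sketch of the client side (the host name example.com is just a placeholder):

```python
import ssl
import socket

# Create a client-side TLS context; by default it requires a valid,
# trusted server certificate and checks the host name against it.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True
print(ctx.check_hostname)                     # True

# Connecting would then look like this (requires network access):
# with socket.create_connection(("example.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#         print(tls.version())  # the negotiated protocol version
```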

    It is also worth mentioning the SSH (Secure Shell) protocol. This protocol resembles SSL in general but has some differences: SSH was designed for message exchange between UNIX servers and requires authentication of both sides. SSH supports logical channels inside one secured session, and it uses key pairs rather than certificates for authentication.

    Secure transport protocols are an effective, tested, and widely used means of data transfer over public communication channels. The SSL protocol is an efficient solution for developing secure client-server applications that must use open communication channels. But take into account that SSL encrypts data only during transfer; the data becomes accessible in unprotected form on the client and server. So security must be comprehensive and well designed, and communication channels must not be the only secured element.


Security in client-server and network applications

    The following sections show cryptography in action, applying the main principles of cryptographic protection discussed above.

    To quickly review these principles, consider how we transfer data over the network today. When the internet appeared, its main goal was to make information available to everyone. Today, booking plane tickets and hotel rooms online, we want to protect most of the information we transfer, from our credit card details to other data such as the destination of a trip.

Two approaches to implement SSL: STunnel and SecureBlackbox

    Use of the SSL/TLS protocol is enough to secure data transferred over the network; even if someone intercepts such data, decrypting it would take far too much time. There are many ways to implement SSL protection for transferred data.

    The cheapest way is to use the STunnel application, which creates a secure channel between two computers. Such a channel is almost always transparent to the application that uses it, but it requires fine-tuning and does not work with all protocols. The main disadvantage of this mechanism is that an attacker can access unprotected data on the user's computer while the data travels between the application and STunnel.




    Fig. 5. If the application exchanges unencrypted data, a third-party application can gain access to the data.


    It is necessary to say that STunnel is best when the client and/or server software cannot be changed; in other words, when you have only executable modules but not their source code. Although the attacker can get access to data on the user's computer, such protection is better than no protection at all. STunnel can also be useful if you have integrated SSL support into a client-side application but cannot do the same on the server for some reason; then STunnel can be installed on the server side. This case requires a check of the security of the server itself, but in general it can result in a secure system.

    Integration of the protocol into the application itself increases security. You must use an integrated solution when the operational environment is unknown or insecure. If you are developing your own application, you can use components that integrate SSL directly into it, for example SecureBlackbox.

    Remember that you should use an SSL connection not only when data is transferred over the internet but also over local networks. If even one channel is insecure, an attacker can use it to obtain the information he needs, or at least something that simplifies its decryption. So, if your system transfers important data over the network, or even data that can merely assist an attack, you must use a secure connection. This helps protect data from both unauthorized access and modification. Always remember the rule: "any system is as weak as its weakest part."

    An attacker can try to access data not only during transfer, but also when the data rests on some medium such as a hard disk or tape drive. This is possible on both the client side and the server side.

Server-side threats

    Server protection cannot be blindly trusted, even though operating system developers release patches when security problems are discovered; patches do not always save the situation and can sometimes even make it worse. Thus, additional protection mechanisms are used for server-side data besides the OS's built-in facilities. While carefully keeping a system up to date is the administrator's job, we can examine database protection in more detail.

    A database can be protected in two ways. The first is to control access through the database server: the server checks all passwords and access rights. A disadvantage of this scheme is that an attacker who gets access to the server gets access to the database; the attack vectors include not only database modification but also copying the data files. For example, create a database on one computer and protect it with a password using the database server. Then create a database with the same name on another computer, protected the same way but with a different password, and copy the first database over the second. You can now access the first database using the password set for the second one, because the access-control information is stored not in the database but in the database server's configuration.

    Another way to protect data is encryption. Some servers have built-in encryption capabilities, and there are even special SQL commands for this purpose. It must be noted, however, that encryption slows down performance and brings its own operational requirements, such as key management.

    Client-side threats

    Close attention should also be paid to the security of software installed on the client side. As mentioned before, the user may have minimal or no knowledge of computer operations and might use a computer infected with a Trojan application for a long time without noticing. So, when developing a client application, you must be prepared for the situation where the client computer is controlled by third parties: if your application stores any data that might turn out to be important, that data should be encrypted.
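    Proper encryption of stored data requires a vetted cryptographic library. As a stdlib-only sketch of the closely related problem of detecting unauthorized modification of stored data (names are illustrative, not a SecureBlackbox API), an HMAC tag can be stored alongside the data so that any tampering is caught before the data is trusted:

```python
import hashlib
import hmac
import os

def seal(data: bytes, key: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so later tampering is detectable."""
    tag = hmac.new(key, data, hashlib.sha256).digest()
    return data + tag

def open_sealed(blob: bytes, key: bytes) -> bytes:
    """Verify the tag before trusting the data; raise on mismatch."""
    data, tag = blob[:-32], blob[-32:]
    expected = hmac.new(key, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time comparison
        raise ValueError("stored data was modified")
    return data

key = os.urandom(32)  # in practice, derive or store this key securely
blob = seal(b"account=42;balance=100", key)
assert open_sealed(blob, key) == b"account=42;balance=100"
```

    Note that this provides integrity only; confidentiality additionally requires encrypting the data itself.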

    User authentication is the keystone of security, and it must be foolproof. Using a user's ID or account name as a password, or using short passwords, is unacceptable from a security point of view. If the authentication system is badly designed, an attacker will have no problem finding the password quickly, and weak authentication can nullify all the security achieved with the help of cryptography. It is recommended to choose passwords longer than eight characters and to use both numbers and letters. Of course, such passwords are not easy to remember, especially when they have no meaning, but this problem is easily solved with an external password management application.
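    A complementary server-side precaution, sketched below with Python's standard library (an illustration, not the article's product API), is to never store the password itself: store a salted, slow hash and compare candidates in constant time.

```python
import hashlib
import hmac
import os

# Iteration count is a tunable assumption; raise it in production to
# slow down brute-force attacks.
ITERATIONS = 100_000

def hash_password(password: str) -> tuple:
    """Return a (salt, digest) pair for storage instead of the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("guess1234", salt, digest)
```

    With this scheme, even an attacker who copies the credential store (the file-copy attack described earlier) obtains only salted hashes, not reusable passwords.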

    For example, it has become popular to keep passwords on USB drives and flash cards, and you can store a certificate or other useful information alongside the password list. Note that there are special smart cards and USB dongles designed for keeping X.509 certificates; such devices can increase security, but since they store only certificates, they cannot be used as password keepers. Keeping passwords on an external medium has several advantages: you can carry your passwords with you, and in case of danger the medium can be destroyed relatively easily. You can use a different password for each application or system, and you can easily use long passwords that are hard for an attacker to guess or find using a brute-force attack. When only the person who holds the device can access the system, the computer is protected not only from outside attackers but also from anyone who tries to use it during the owner's absence.
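    Since an external password keeper removes the need to memorize anything, passwords can be generated rather than invented. A small sketch using Python's `secrets` module, following the article's guidance of more than eight characters mixing letters and digits (the length default is an assumption):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def generate_password(length: int = 16) -> str:
    """Generate a random password; rejects lengths of 8 or fewer."""
    if length < 9:  # the article recommends passwords longer than 8 characters
        raise ValueError("password too short")
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pw = generate_password()
assert len(pw) == 16
assert all(c in ALPHABET for c in pw)
```

    `secrets` draws from the OS cryptographic random source, unlike the `random` module, which is predictable and unsuitable for passwords.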

    The multi-tier application architecture itself allows one more barrier against unauthorized access: you can restrict user access depending on the user's tasks. Building on this, you can develop client modules so that operations performed by people with limited access are restricted within the application itself. For example, there are bank branches where the set of operations performed by clerks is limited to one or two operations; in this case, only those operations should be available in the clerks' client module. At the same time, the branch manager can use an advanced version of the application that allows altering the database. Thus, security can be further increased by segmenting the application according to the tasks performed.
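    The segmentation idea above can be sketched as a simple role-to-operations map (role and operation names are invented for illustration); the same check should run on the middle tier or server, never only in the client build:

```python
# Hypothetical roles and operations modeling the bank-branch example.
ROLE_OPERATIONS = {
    "clerk": {"deposit", "withdraw"},
    "manager": {"deposit", "withdraw", "open_account", "alter_records"},
}

def authorize(role: str, operation: str) -> bool:
    """Server-side check mirroring the client restriction: never trust
    the client alone to enforce its own limits."""
    return operation in ROLE_OPERATIONS.get(role, set())

assert authorize("clerk", "deposit")
assert not authorize("clerk", "alter_records")
assert authorize("manager", "alter_records")
```

    Shipping a reduced client build limits accidental misuse, but only the server-side check stops an attacker who replaces the client with his own.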

    Analyzing Weaknesses in System Security

    You can use a simple scheme to analyze potential gaps in your system security towards creating a more secure application:

    • Analyze the security of data storage and data transfer channels;
    • Identify the points at which the data is not encrypted;
    • If the data is not encrypted, check if it is freely accessible;
    • If the data is encrypted, check if the attacker can obtain something usable to recover the encryption keys.

We appreciate your feedback.  If you have any questions, comments, or suggestions about this article please contact our support team at kb@nsoftware.com.