Data Management: Main Functions and Processes

Data Management Practices Summary

The UK Data Service webinar on data management basics offers step-by-step guidance on the core features and processes involved in data management. These practices are typically used in scientific research but can be applied to other settings in which personal data is collected and analyzed. The video covers the main concerns regarding the purpose, ethics, usage, storage, security, and disposal of data, spanning the full data lifecycle.

First, the webinar presents data management as an emerging paradigm in research and business. Data sharing is an important process, as it gives multiple interested parties access to a wealth of data that can be used to better understand a given process, make improvements, and steer the direction of research (UK Data Service, 2017). Employees therefore need to understand these processes, the costs involved, and the training necessary for proper data management.

Most data management plans have a similar structure. The steps involved in data management include assessment of existing data, information on new data, quality assurance of data, backup and security of data, anonymization and consent processes, difficulties in data sharing, copyright issues, responsibilities, and management of the newly collected data (UK Data Service, 2017). The webinar discusses some of these points in greater detail.

A substantial amount of time is devoted to the ethical side of data collection. Researchers work with highly sensitive personal data, which may include names, ages, addresses, diagnoses, personal thoughts and beliefs, and other information that respondents would want protected. The first topic of interest is the informed consent procedure, which is usually handled through a form. There are different kinds of forms, the most common being universal and granular consent forms (UK Data Service, 2017). The former is used when each part of the data collection process is integral to the research, whereas the latter is used when several processes can be treated, and consented to, separately.
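For illustration, a granular consent form can be thought of as a record with one flag per separable processing activity. The following minimal sketch is not from the webinar, and the activity names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class GranularConsent:
    """One flag per separable processing activity (hypothetical fields)."""
    participant_id: str
    interview_recording: bool = False   # consent to record the interview
    data_archiving: bool = False        # consent to deposit data in an archive
    future_reuse: bool = False          # consent to reuse data in later studies

    def permits(self, activity: str) -> bool:
        # A universal form would collapse all of these into a single yes/no.
        return getattr(self, activity, False)

consent = GranularConsent("P-001", interview_recording=True, data_archiving=True)
print(consent.permits("future_reuse"))  # False: this activity was declined
```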

In most studies where consent must be obtained, a degree of anonymity is expected and promised. For quantitative data, which includes information such as name, address, age, locality, and workplace, generalization is used. For qualitative data, summarization and pseudonyms are applied (UK Data Service, 2017). A researcher must walk the line between protecting the anonymity of the individuals involved in the research and keeping the data from becoming too incoherent and detached from the original file. Lastly, researchers can protect data by restricting access to it and differentiating levels of access.
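A minimal sketch of both techniques, assuming a simple participant record; the age bands and pseudonym scheme here are illustrative choices, not prescriptions from the webinar:

```python
import hashlib

def generalize_age(age: int, band: int = 10) -> str:
    """Replace an exact age with a coarser band, e.g. 34 -> '30-39'."""
    low = (age // band) * band
    return f"{low}-{low + band - 1}"

def pseudonymize(name: str, salt: str = "project-secret") -> str:
    """Derive a stable pseudonym; the salt must be stored separately."""
    digest = hashlib.sha256((salt + name).encode()).hexdigest()[:8]
    return f"Participant-{digest}"

record = {"name": "Jane Doe", "age": 34}
safe = {"name": pseudonymize(record["name"]),
        "age": generalize_age(record["age"])}
print(safe)  # e.g. {'name': 'Participant-1a2b3c4d', 'age': '30-39'}
```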

Data compilation and organization is an important process that allows users to sort through the information and find what they need. Research compilations are typically built from key parts, such as the methodology, questionnaires, transcripts, data lists, and links to relevant publications. Every variable in the data must have a distinct name to differentiate it from other parts of the study and avoid confusion. At the documentation level, researchers are advised to store information in common long-term preservation formats, such as XML, RTF, and PDF (UK Data Service, 2017). The data itself should be kept in separate folders and subfolders, differentiated by importance, relevance, and access level.
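As an example, such conventions can be enforced programmatically. The folder layout and naming pattern below are hypothetical, chosen only to illustrate the idea:

```python
import re
from pathlib import Path

# Hypothetical layout: subfolders per access level, files named
# <project>_<type>_<version>.<ext>, e.g. "study01_transcript_v02.rtf".
LAYOUT = ["raw/restricted", "anonymized/shared", "documentation/public"]
NAME_PATTERN = re.compile(r"^[a-z0-9]+_[a-z]+_v\d{2}\.(xml|rtf|pdf|csv)$")

def scaffold(root: str) -> None:
    """Create the folder tree that separates access levels."""
    for sub in LAYOUT:
        Path(root, sub).mkdir(parents=True, exist_ok=True)

def check_name(filename: str) -> bool:
    """Accept only files that follow the naming convention."""
    return bool(NAME_PATTERN.match(filename))

scaffold("project_data")
print(check_name("study01_transcript_v02.rtf"))  # True
print(check_name("Final Transcript (2).docx"))   # False
```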

Data security strategies are obligatory for researchers, as they come into possession of data that is of personal importance to the respondents and can be used against them by unwitting or nefarious individuals. Standard protective measures include passwords, antivirus programs, and firewalls to guard data against outside intrusion. Encryption software is also recommended to further increase the resilience of data stores to cyber-attacks (UK Data Service, 2017).
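For instance, a data file could be encrypted at rest with the third-party `cryptography` package; the webinar does not name a specific tool, and the file name here is hypothetical. Key management is deliberately left out of scope:

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()          # store this key separately from the data
cipher = Fernet(key)

# Assumes a hypothetical raw data file named "interviews.csv".
with open("interviews.csv", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

with open("interviews.csv.enc", "wb") as f:
    f.write(ciphertext)

# Later, only a holder of the key can recover the original bytes.
plaintext = cipher.decrypt(ciphertext)
```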

Lastly, data security also covers restoring data after a critical malfunction or corruption. Back-up solutions include cloud servers, external drives, and physical copies of the material.
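A backup is only useful if it is intact, so a simple safeguard is to verify each copy against a checksum. A minimal sketch, reusing the hypothetical encrypted file from the previous example:

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: str) -> str:
    """Hash a file in chunks so large files do not exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def backup(src: str, dst: str) -> None:
    shutil.copy2(src, dst)  # copy contents and metadata
    assert sha256(src) == sha256(dst), "backup corrupted in transit"

Path("backup").mkdir(exist_ok=True)
backup("interviews.csv.enc", "backup/interviews.csv.enc")
```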

The last part of the webinar deals with data destruction. After the results have been comprehensively analyzed, anonymized, and compiled, the raw data must be disposed of appropriately to prevent it from being restored and used by third parties. Data destruction is the final stage of the data lifecycle. At the same time, it is notoriously difficult to achieve, as deleted data is usually not destroyed but merely overwritten by new information over time. Several programs claim to overwrite the storage medium enough times that data restoration becomes impossible (UK Data Service, 2017). Only the physical destruction of the storage devices can guarantee the proper elimination of personal information used in the research.
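The principle behind such overwriting tools can be approximated in a few lines; the sketch below is an illustration rather than a certified erasure method, and the docstring caveat explains why physical destruction remains the stronger guarantee:

```python
import os

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes before unlinking it.

    Illustration only: on SSDs and journaling file systems, wear
    leveling and copy-on-write can leave old blocks intact, which is
    why physical destruction remains the only firm guarantee.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace every byte with noise
            f.flush()
            os.fsync(f.fileno())       # force the write to disk
    os.remove(path)
```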

Data Management and Strategic Planning

With the emergence of data management as a separate field of expertise, a data-driven strategic planning framework has become a staple in the majority of modern businesses. The typical framework involves the following steps, which can be applied to almost any business process (Chang, 2016); a minimal code sketch of the pipeline follows the list:

  • Determining the type of project and its focus;
  • Determining the key issues;
  • Assessing and defining data;
  • Data collection;
  • Performance analysis;
  • Organization and presentation.
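Expressed as code, the six steps above might look like the following skeleton; every stage function and its sample output here are invented placeholders:

```python
# Hypothetical stage functions; each returns input for the next stage.
def define_project():    return {"type": "market entry", "focus": "pricing"}
def identify_issues(p):  return ["price sensitivity", "competitor response"]
def define_data(issues): return {"sources": ["survey", "sales history"]}
def collect(spec):       return [{"price": 9.99, "units": 120}]  # stand-in data
def analyze(rows):       return {"mean_units": sum(r["units"] for r in rows) / len(rows)}
def present(results):    print(f"Performance summary: {results}")

# The six steps from Chang (2016), run in order.
project = define_project()
issues = identify_issues(project)
spec = define_data(issues)
rows = collect(spec)
results = analyze(rows)
present(results)
```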

Almost every strategic analysis tool, such as SWOT, PESTLE, and Porter’s Five Forces, utilizes quantitative and qualitative data that can be assembled through various means, either directly or indirectly (Chang, 2016).

Six Sigma and Total Quality Management (TQM) also require a solid information-driven foundation to improve the safety and quality of products or services (Chang, 2016). Data collection, analysis, and assessment are likewise required to determine and uphold industry standards. This is done through various means, including objective assessment of products as well as measurement of customer perceptions of the services provided.
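As one concrete example of that information-driven foundation, consider a process-capability calculation using the Cpk index, a standard Six Sigma statistic; the measurements and specification limits below are invented for illustration:

```python
from statistics import mean, stdev

# Hypothetical fill weights (grams) and specification limits.
weights = [498.2, 501.1, 499.5, 500.4, 498.9, 500.8, 499.7, 500.1]
LSL, USL = 495.0, 505.0

mu, sigma = mean(weights), stdev(weights)
# Cpk = min((USL - mu) / 3*sigma, (mu - LSL) / 3*sigma):
# how many 3-sigma widths fit between the process mean and each limit.
cpk = min((USL - mu) / (3 * sigma), (mu - LSL) / (3 * sigma))
print(f"Cpk = {cpk:.2f}")  # 1.33 is a common minimum; Six Sigma targets ~2.0
```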

Data Management in Literature

The majority of academic literature on data management focuses on three key areas: big data management and its use in predictive analytics; inter-system data sharing, which enables quicker and more comprehensible analysis for decision-making; and cybersecurity. Big data is a modern buzzword that gained popularity due to its promise of predicting the future needs of employees, customers, and business processes.

According to Schoenherr and Speier-Pero (2015), supply chain management can benefit from big data to optimize its processes. The researchers state that the predictive capabilities of big data analysis systems deliver above-average results and help improve supply chain design and competitiveness (Schoenherr & Speier-Pero, 2015).

Nevertheless, the use and analysis of big data require sophisticated technologies and proper data collection. Janssen, van der Voort, and Wahyudi (2017) state that the critical factors in the accuracy of big data analysis include the veracity, variety, and velocity of data, amplified by the total size of the data pool. Therefore, although big data is useful for predictive analysis, the data itself must adhere to a variety of standards to be effective.
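Such standards might be operationalized as automated checks. The proxies and thresholds below are invented for illustration and are not taken from Janssen et al. (2017):

```python
from datetime import datetime, timedelta

def quality_report(rows: list[dict], now: datetime | None = None) -> dict:
    """Crude proxies: veracity as completeness, velocity as freshness,
    variety as the number of distinct value types present."""
    now = now or datetime.now()
    complete = [r for r in rows if all(v is not None for v in r.values())]
    fresh = [r for r in rows if now - r["timestamp"] < timedelta(days=1)]
    return {
        "veracity": len(complete) / len(rows),  # share of complete records
        "velocity": len(fresh) / len(rows),     # share arriving within a day
        "variety": len({type(v) for r in rows for v in r.values()}),
    }

rows = [
    {"value": 10, "timestamp": datetime.now()},
    {"value": None, "timestamp": datetime.now() - timedelta(days=3)},
]
print(quality_report(rows))  # {'veracity': 0.5, 'velocity': 0.5, 'variety': 3}
```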

Data sharing between customers, suppliers, and managers has always been a topic of interest in business. The modern globalized economy would not be possible without new technologies connecting these three stakeholder groups, as they have significantly increased the speed of decision-making. Drazen, Morrissey, Malina, Hamel, and Campion (2016) highlight the importance and complexity of data sharing, stating that the most prominent issues in the field are data standardization, data protection, and data quality. Cui, Liu, and Wang (2016) state that the optimal choice for large corporations processing and facilitating data sharing at scale is cloud storage with embedded encryption technology.

Information security became global news after 2015, a year known for a multitude of devastating hacking attacks that resulted in the loss of numerous personal files, compromised assets, and the complete paralysis of the banking systems of several countries. These attacks fit the pattern of hacktivism, which targeted government systems to expose their inner workings. Laybats and Tredinnick (2016) state that, in the majority of cases, the leaks and breaches were possible because of neglect of the most basic cybersecurity protocols, such as timely updates, password discipline, firewall usage, and shutting down computers after work.
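On the storage side, password discipline implies never keeping passwords in plain text. A minimal sketch using only the Python standard library; the iteration count is an assumed work factor, not a cited recommendation:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # assumed PBKDF2 work factor

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only these are stored, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("password123", salt, digest))                   # False
```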

Conclusions

There are several lessons to be learned from the webinar and from the literature reviewed in this paper. The primary lesson is that standards of data collection are very useful for improving data quality, ensuring faster and easier analysis, and upholding the ethical standards associated with data handling. The second lesson is that the majority of issues with big data, information security, and data sharing stem from a lack of training and outdated technologies rather than from critical failures within the systems themselves. It would be my duty as a future leader to enforce rigorous standards of data management to improve the safety, security, and predictive capabilities that data provides.

References

Chang, J. F. (2016). Business process management systems: Strategy and implementation. New York, NY: Auerbach Publications.

Cui, B., Liu, Z., & Wang, L. (2016). Key-aggregate searchable encryption (KASE) for group data sharing via cloud storage. IEEE Transactions on Computers, 65(8), 2374-2385.

Drazen, J. M., Morrissey, S., Malina, D., Hamel, M. B., & Campion, E. (2016). The importance – and the complexities – of data sharing. The New England Journal of Medicine, 375(12), 1182-1183.

Janssen, M., van der Voort, H., & Wahyudi, A. (2017). Factors influencing big data decision-making quality. Journal of Business Research, 70, 338-345.

Laybats, C., & Tredinnick, L. (2016). Information security. Business Information Review, 33(2), 76-80.

Schoenherr, T., & Speier-Pero, C. (2015). Data science, predictive analytics, and big data in supply chain management: Current state and future potential. Journal of Business Logistics, 36(1), 120-132.

UK Data Service. (2017). Data management basics [Webinar].
