Monday, October 17, 2016

EU GDPR is coming, but how to start the journey - Chapter 2

How to understand what data our employees are sending and where it is used.

Last time we looked at life from the angle of understanding what is happening inside our network. This time, let's extend our mindset to protecting the information that moves inside and outside our network, and to understanding and making visible where a file is opened. Cool - isn't it?

One additional thing that brings more complexity is, of course, the hybrid setup. Our planet is not so black and white; instead it has some shades of other colors between black and white :-).

So identity is not black and white, the data location is not black and white, but IT still lives in black and white, trying to manage the same resources with less money - you got it.

So if we start from the basics, we need to understand the data and classify it, which makes this a big change management and communication issue from the end-user point of view. Users are used to saving data wherever they feel comfortable or have grown used to - even if there have been guides and policies on where to store it - without being able to reuse what colleagues have created. Mine is always the best, and that's why I start from scratch or use only copies of what I have created myself.

Back to classification - from a very pragmatic point of view, data classification can be boiled down to a couple of classes (a small sketch follows the list):
  • Secret
  • Confidential
  • Internal
  • Not restricted / Public
  • and Personal, which makes this even funnier under EU GDPR - nice word again.
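
Just as a tiny illustration (one possible way to pin the classes down in code, not any standard), the levels could live in a simple Python enumeration so that every tool in the chain uses the same names:

from enum import Enum

class Classification(Enum):
    SECRET = "Secret"
    CONFIDENTIAL = "Confidential"
    INTERNAL = "Internal"
    PUBLIC = "Not restricted / Public"
    PERSONAL = "Personal"   # the EU GDPR twist: personal data cuts across the others

# Example: tagging a document record with a class
document = {"name": "salesguide_2005draft.doc", "classification": Classification.INTERNAL}
print(document["classification"].value)   # -> Internal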
Sounds clear, and it should be easy: after we have configured the new classification for our organization, the documents people create from now on will be classified. But what about the 345 billion old legacy files we have, like the Summerparty2001 pictures, invitation and food list? Youngsters usually say OMG - still saving data that old, you are so old school. True, unfortunately. Organizations have migrated data transition after transition after transition, from NT 3.51 (or maybe from OS/2 or Warp) to Windows NT 4, to Windows 2000, to Windows 2003, to Windows 2008, to Windows 2012 R2 file servers, and are now thinking of migrating the data to Windows Server 2016 and so on. And with every transition we purchase more storage and build and configure ever more sophisticated storage solutions, maybe with deduplication to save capacity, but we still never touch the root cause.

Let's avoid opening the backup discussion here - sorry, we can't. We back up the local branch office servers that offer a local network share to the users, a share which in parallel is the first place for user desktop backups - oops, the same file sits in X:\data\path\salespresentation.pptx as a file and in X:\Backups\GasMonkey\backup22022002.something and so on. Since we don't have a backup solution and tapes in the branch office, we somehow copy the data to the central data center, where both the files and the backup files are copied to tape and archived. Simple, nice and easy - well, no.

Let's take one variable here and call it the human - you know, the person who talks and walks and does all kinds of funny things. The human saves the file created on their PC to the local drive, copies it to the local network drive, and in parallel emails it to 20 best friends who might need that file, or maybe not. Each of these best friends saves the file to their local PC and maybe also to the local network drive in their office, which is then backed up to the central data center in that region - not forgetting the automated backup scripts that copy the file to the local network share, from where other scripts copy it to the data center, where it is backed up to a tape that has maybe never really been tested from the bottom up.


And suddenly the file is stored 2, 3, 5, 10 or 45 times, consuming storage capacity with a value of 0 once we look at the name of the file - salesguide_2005draft.doc. Frankly, this does not sound fun at all.....
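
To make that duplication visible in practice, here is a minimal sketch in Python (the X:\data root and the lack of any error handling are assumptions for illustration only, not a production deduplication tool): it walks a share and groups identical files by content hash.

import hashlib
from collections import defaultdict
from pathlib import Path

def file_hash(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 of a file, read in chunks to spare memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def find_duplicates(root: str):
    """Group every file under 'root' by content hash; duplicates share a key."""
    groups = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            groups[file_hash(path)].append(path)
    return {digest: paths for digest, paths in groups.items() if len(paths) > 1}

if __name__ == "__main__":
    # X:\data is the branch office share from the example above - adjust to taste.
    for digest, copies in find_duplicates(r"X:\data").items():
        print(f"{len(copies)} copies of the same content:")
        for copy in copies:
            print(f"  {copy}")

Running something like this against an old branch office share before any classification project starts is usually an eye-opener.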

And the short conclusion is that technology is not the limit or the root cause here - it is the human, and the lack of policies and governance: data classification with retention/archiving periods, detection and control, and proactive communication, owned by the business and led by example with commitment.

Sounds familiar - be honest.

You got the point: we need to classify the data, and we must have metadata that triggers and drives retention. For example: start a workflow to get approval to delete, or to keep for another 6 months, all files that are classified Internal and have the Draft metadata attribute. This comes back to the terms workflow - automate - process, which are not technical IT terminology only. We might ask ourselves whether normal disk systems and file shares give us these features. If your answer is yes - are they in use? If the answer is no - the only question is why?
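
How could metadata actually trigger that? Here is a minimal sketch, assuming invented field names (classification, status, last_modified) and a placeholder workflow call - your real metadata schema and workflow engine will differ:

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class FileRecord:
    path: str
    classification: str   # e.g. "Internal", "Confidential", "Personal"
    status: str           # e.g. "Draft", "Final" - hypothetical metadata attribute
    last_modified: date

def needs_retention_decision(record: FileRecord, today: date) -> bool:
    """Flag Internal drafts untouched for about 6 months for an approval workflow."""
    if record.classification == "Internal" and record.status == "Draft":
        return today - record.last_modified > timedelta(days=182)
    return False

def start_approval_workflow(record: FileRecord) -> None:
    # Placeholder: in real life this would call your workflow engine to ask the
    # data owner whether to delete the file or keep it for another 6 months.
    print(f"Approval requested for {record.path}")

if __name__ == "__main__":
    record = FileRecord(r"X:\data\salesguide_2005draft.doc", "Internal", "Draft", date(2016, 1, 15))
    if needs_retention_decision(record, date(2016, 10, 17)):
        start_approval_workflow(record)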

So classification is needed, and it must be possible to apply it automatically at document creation based on the content - social security numbers, bank account numbers, credit card numbers and so on - while still allowing users to override the automatic rule.
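
A very rough sketch of such an automatic rule with a user override follows - the regular expressions below (a Finnish personal identity code and an IBAN-like account number) are crude approximations for illustration only, not real validators:

import re
from typing import Optional

# Simplified, illustrative patterns only - real detection needs proper validators.
PATTERNS = {
    "Personal":     re.compile(r"\b\d{6}[-+A]\d{3}[0-9A-Z]\b"),        # Finnish personal identity code (rough)
    "Confidential": re.compile(r"\b[A-Z]{2}\d{2}(?: ?\d{4}){3,7}\b"),  # IBAN-like bank account (rough)
}

def classify(text: str, user_override: Optional[str] = None) -> str:
    """Suggest a classification from content, but let the user override it."""
    if user_override:
        return user_override
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            return label
    return "Internal"

if __name__ == "__main__":
    print(classify("Invoice, pay to FI21 1234 5600 0007 85"))         # -> Confidential
    print(classify("Summer party food list"))                          # -> Internal
    print(classify("Summer party food list", user_override="Public"))  # -> Public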

Check out more on Wikipedia using the following link - if it just works:
meta data
or using the following link to Digital Guardian:
digital guardian data classification


I will continue with more about metadata in the next article, and as usual:

"All ideas and thoughts are my own like pictures unless told the source"

To be Continued ..


Bikers' meeting, Haltiala / Finland, August 2016 - approx. 200 bikers (mostly over 40)
