
What is digital ethics?

13 May 2024

As charities move to digital in many of their activities, we examine the importance of a code of digital ethics to govern their behaviour.

Charities of all types are becoming increasingly data-driven and expanding their digital activities to encompass fundraising, service delivery, marketing and communications, and accessibility.

 

As these digital activities become more prominent, so too does the obligation to consider and adhere to some standard of digital ethics.

 

 

What is digital ethics and privacy?

 

There are many formal definitions of digital ethics – Wikipedia, for example, defines digital ethics (also called information ethics) as "the branch of ethics that focuses on the relationship between the creation, organisation, dissemination, and use of information, and the ethical standards and moral codes governing human conduct in society".

 

The purpose of a code of digital ethics is to set out the principles of conduct that charities should follow in digital activities such as increasing their reach using social media and using donor data to inform fundraising campaigns.

 

In these examples, digital ethics would influence how donor data is stored, used, and shared, and how social media should be employed: to provide news and information, as an advertising medium, and as a source of contacts.

 

The bottom line is that digital ethics is about moral values: it is concerned not so much with what a charity can do in the digital sphere, but what it ought to do. Or perhaps, more importantly, it is not so much about what it can’t do, but what it ought not to do.

 

This distinction is important, because what organisations can do with digital data – as opposed to what they ought to do – is already governed and restricted by rules and regulations such as the General Data Protection Regulation (GDPR). This places a number of obligations on organisations that collect and use digital data about people, including:

  • Asking for permission to collect and store data about users
  • Asking for permission to sell any personal data that has been stored
  • Giving users the right to request that data about them is deleted
  • Giving users access to personal data that has been collected and stored

The implication of this is that charities can collect and store personal data about their service users and donors if they have permission to do so, and they can sell it on to whomever they like as long as they have permission to do so.
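As a purely illustrative sketch (not a compliance tool, and with all class, field, and method names invented here), a charity's systems might record consent alongside personal data and support access and erasure requests along these lines:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DonorRecord:
    """Hypothetical record pairing personal data with GDPR-style consent flags."""
    donor_id: str
    email: str
    consent_to_store: bool = False          # permission to collect and store data
    consent_to_share: bool = False          # permission to sell or share data
    consent_recorded_at: datetime | None = None
    personal_data: dict = field(default_factory=dict)


class DonorStore:
    """Minimal in-memory store sketching the rights of access and erasure."""

    def __init__(self) -> None:
        self._records: dict[str, DonorRecord] = {}

    def record_consent(self, record: DonorRecord) -> None:
        """Save a record once consent has been captured, noting when it was given."""
        record.consent_recorded_at = datetime.now(timezone.utc)
        self._records[record.donor_id] = record

    def subject_access_request(self, donor_id: str) -> dict:
        """Return everything held about a person (right of access)."""
        record = self._records[donor_id]
        return {"email": record.email, **record.personal_data}

    def erase(self, donor_id: str) -> None:
        """Delete a person's data on request (right to erasure)."""
        self._records.pop(donor_id, None)
```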

 

But any entry-level digital ethics course for students will tell you that even if you are permitted to do something (such as store a user’s data), it may be the case that doing so would be unethical. Collecting and storing personal data about a service user because you think it will enable you to provide a better service to them in the future is one thing.

 

Collecting and storing it without any clear purpose, despite the risk that the data could be breached and leaked by a hacker, is quite another. In this respect, digital ethics both guides and is dictated by what a charity feels it ought to be involved in.

 

 

What are your digital expectations?

Charities need to ask themselves some questions. Do service users who come to a charity expect their personal data to be sold to an advertiser, even if they give the charity permission to do so? Ought charities to raise funds by selling personal data supplied to them by people seeking help? Does the good done with the funds generated by selling service users’ data outweigh any reservations a charity might have? What happens if the data sold to an advertiser is subsequently stolen and leaked by a hacker?

 

These are all questions of digital ethics that charities need to consider.

 

This also highlights the relationship between digital ethics and security. Aside from the obligation imposed by regulations like GDPR to store personal data securely, any digital ethics book will emphasise that data security is an ethical obligation as well.

 

A charity’s digital ethics should dictate that it ought to keep its data secure: failure to do so would be a huge breach of the trust placed in it by its constituents – its partners, supporters, and particularly its service users, who may be vulnerable and who rely on the charity to look after their interests.

 


Why is digital ethics important?

Trust is key to the charity sector: without it, charities cannot exist and thrive. That means that every charity has to have its own clear ethical standards which govern what it will and will not do, who it will and will not deal with, how it will and will not treat its service users, suppliers and partners, and so on.

 

And, crucially, the charity has to uphold its ethical standards in order to earn the trust and support it needs to carry out its charitable activities.

 

One of the key areas in which digital ethics is likely to be vitally important in the future is in the field of artificial intelligence (AI). This is an emerging technology and one which is unlikely to fulfil its potential for good unless it gains acceptance through trust.

 

The problem is that many people instinctively do not trust AI because they fear how it could be used. In particular, they fear how AI could breach privacy rights and enable organisations to know more than they are comfortable with about their habits, preferences, tastes, beliefs, and so on.

 

One way to foster trust in AI is to introduce rules governing its usage – and indeed organisations such as the European Commission have put forward a series of recommendations about the use of AI aimed at ensuring that business interests do not take precedence over the wellbeing of the general public.

 

But organisations can go a long way towards engendering trust in AI by adopting a code of digital ethics in the workplace which makes it clear what they do and do not regard as acceptable uses of AI. The European Commission’s Andrus Ansip stated this emphatically: "The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies."

 

 

What frameworks exist for digital ethics?

 

For any organisation, including a charity, building a code of digital ethics can be a daunting prospect. That’s because it can be hard to recognise which decisions about the use of digital technologies are – or should be – ethical ones.

 

That’s where a simple digital ethics framework can help. In general terms, organisations can benefit from the following four principles of digital ethics:

 

Respect trust

 

As an organisation that collects and stores data about customers, service users, or anyone else, it makes sense to consider whether any of your activities using this data betray the trust that they have placed in your organisation. Put another way, are you using someone’s data in a way that they would not want you to, would not expect you to, or that is not in their best interest? If so, digital ethics dictates that you ought not to, even though you could.

 

There is an issue of transparency here as well: people can only give informed consent for the use of their data if they fully understand all the uses to which it may be put. If you are not transparent enough about your intentions then you may have permission to use data in ways that they would never have accepted had they known. This is clearly unethical.

 

Look after digital data

 

Data security may be a regulatory requirement, but it is also a matter of digital ethics. That’s because personal data is private, and you can only respect that privacy if you look after the data in an appropriate manner.

 

That also implies maintaining data integrity: steps should be taken – such as robust governance and audit procedures – to ensure that data cannot be altered or deleted, as well as to protect it from theft by hackers. That’s because if data is altered and then sold or processed, it can be very difficult to undo any actions taken as a result, or any harm to an individual that may arise from them.
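As a small, hypothetical sketch of what an integrity check can look like in practice (the record fields and storage approach here are invented for illustration), a record can be stored alongside a cryptographic hash that is re-checked before the data is processed or shared:

```python
import hashlib
import json


def fingerprint(record: dict) -> str:
    """Compute a SHA-256 hash over a canonical JSON form of the record."""
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()


# Store the hash alongside the record when it is first saved...
record = {"service_user": "SU-1042", "needs_assessment": "housing support"}
stored_hash = fingerprint(record)

# ...and re-check it before the record is processed or shared.
if fingerprint(record) != stored_hash:
    raise ValueError("Record has changed since it was stored; investigate before processing.")
```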

 

Watch out for unintentional unethical behaviour

 

Accurate data is just that: accurate. But problems can still arise when accurate data is unintentionally misused by the people handling it.

 

A common example of this is confirmation bias, where data is used to confirm an existing belief or expectation, or to discount information that contradicts one. This type of "data cherry-picking" can lead to data bias, and in complex systems, it can lead to algorithmic bias.

 

Algorithmic bias could lead to a situation where a charity fails to assist certain groups of people, perhaps based on age, sex, or race, because a computer algorithm is biased and decides, wrongly, that they do not need or deserve such assistance. This is clearly unethical and is an example of behaviour which is unintentionally digitally unethical.
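A simple, illustrative safeguard (the field names and data here are invented for this sketch) is to compare an automated decision process's outcomes across groups before acting on them, and to treat large gaps as a prompt for human review:

```python
from collections import defaultdict


def approval_rates_by_group(decisions: list[dict]) -> dict[str, float]:
    """Compare how often an automated decision recommends assistance for each group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for decision in decisions:
        group = decision["age_band"]           # hypothetical grouping field
        totals[group] += 1
        approved[group] += decision["assist"]  # 1 if assistance was recommended, else 0
    return {group: approved[group] / totals[group] for group in totals}


# Hypothetical decisions produced by a screening algorithm.
decisions = [
    {"age_band": "under-30", "assist": 1},
    {"age_band": "under-30", "assist": 1},
    {"age_band": "over-65", "assist": 0},
    {"age_band": "over-65", "assist": 1},
]

# A large gap between groups is a prompt for human review, not proof of bias.
print(approval_rates_by_group(decisions))  # {'under-30': 1.0, 'over-65': 0.5}
```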

 

 

Foster a culture of digital ethics

 

No framework can possibly be complete, so it is important for employees in any organisation to examine the digital ethics dimension in any digital project they undertake, or in any analysis of data that they carry out. More importantly, the conclusions of this ethical examination should guide their behaviour: anything that breaches the organisation’s code of digital ethics must be modified or abandoned.

What is digital business ethics in simple words?

 

Business ethics concerns itself with how business organisations should behave, particularly when they face controversial situations. By taking up a clear ethical position, an organisation engenders trust with customers, business partners, and suppliers. There are various aspects of business ethics, such as trustworthiness, respect, fairness, and caring.

 

Digital business ethics, sometimes called digital ethics in the workplace, is related to and is a subset of business ethics. But one major difference between business ethics and digital business ethics is where responsibility for its oversight falls.

 

Business ethics is the realm of a company chairman, chief executive, or the whole board of directors. But when it comes to digital business ethics, oversight is usually provided by someone more intimately connected to the digital technologies being used in the organisation, such as the head of IT or, in larger organisations, a chief information security officer.

 

In very large organisations there may even be a chief digital business ethics officer or head of digital business ethics whose job it is to ensure that any ethical considerations are examined at the highest levels of the organisation and receive appropriate attention.

 

 

What do you mean by digital journalism ethics?

 

Just as digital ethics is concerned with how organisations ought to behave when it comes to digital technologies, journalism ethics is concerned with how journalists ought to behave while carrying out their journalism duties. Typical elements of journalism ethics include truthfulness, accuracy, objectivity, impartiality, fairness, and public accountability.

 

Digital journalism ethics is the combination of the two, looking at the responsibilities anyone publishing material online ought to consider. This includes fact-checking any content that is published online and making strenuous efforts to avoid propagating incorrect information or "fake news" inadvertently.

 

This is particularly relevant for charities because building trust in their digital content is crucial if they are to effectively communicate the importance of their work, and avoid readers doubting the veracity of their messages and simply shrugging them off.
