In recent months there has been increased discussion around regulating content online. Pakistan recently enforced its rules on removal and blocking of unlawful online content, giving the state regulator broad powers to restrict online content. Recently, India brought online news portals and entertainment content providers, such as Netflix, under its purview, while Sri Lanka has announced plans to formulate laws modelled on Singapore’s rules governing online digital content.
In this episode of Himal podcasts, our senior assistant editor Raisa Wickrematunge interviews Dr. V Sridhar, Professor at the Centre for IT and Public Policy at the International Institute of Information Technology in Bangalore and author of Emerging ICT Policies and Regulations: Roadmap to Digital Economies, on whether regulation of online content is necessary and, if so, when and how it should be introduced.
HSA: In your book you seem to take the position that digital regulation is almost inevitable. Very often, though, the state regulates the digital realm not to fix the market but for authoritarian reasons and the suppression of freedoms. In this context, who do you think the regulator should be, given these realities?
VS: Telecom has been a regulated industry all over the world for quite some time, so you would normally use command-and-control mechanisms to control it. The reason is that the telecom marketplace is not a competitive market. It is an imperfect market with about three to four, a maximum of five, telecom operators providing service. It is therefore an oligopoly, and all the imperfections of an oligopoly will be present in the telecom market, hence the need for a regulator. Now if you move to the digital space – internet companies, digital platforms, over-the-top players and so on – we had not seen active regulation anywhere in the world until recently.
The reason is that this is a new area of technology, with innovative services being provided by the various service providers, and therefore we should encourage innovation. Only after the market reaches a certain point, where its imperfections start to show, do regulators step in. The economic case for regulators stepping in can be seen in the Facebook–Cambridge Analytica scandal or, for example, Snowden's revelations. These things have prompted regulators to put in some rules so that consumer benefits are not affected. So, I would say that regulators have a role to play in the digital space, but not so much as in telecom regulation in the traditional sense. In the book I also advocate, for example, self-regulation and co-regulation as methods by which the digital space could be regulated, rather than a very strict command-and-control mechanism by the state.
HSA: Your book frames regulation as primarily a necessity in the context of imperfect markets, as you mentioned – lack of competition, high barriers to entry and so on. Do you think a purely market view is sufficient to understand digital regulation?
VS: That’s a good question, Raisa. You know, it is still evolving, right? Suppose there has to be a yardstick to determine when a particular industry should or should not be regulated. The yardstick we normally use is the generic principles of market imperfections. That is the economist’s point of view; there can be different views from other areas. For example, if we look from the societal point of view, the economic view of regulation may not be perfect, but as of now telecom regulation always looks for some imperfections in the market.
The internet space – Google or Amazon, for example – they have become monopolies in their domains, and therefore there is reason for the regulator to worry, as it has turned out to be an imperfect market. So, the traditional view of regulators is to look for imperfections in the market and then, where imperfections are present, take appropriate action. Now, it is possible that this may not be the ideal way to look at it. From the societal point of view, even if there is a perfect market – for example in news, where we have so many OTTs [Over The Top media services] providing news – there can still be imperfections, because there can be fake news, there can be propaganda, and it may be of diminishing value for society at large, and therefore regulators might have to come in and take some necessary actions.
HSA: So, I wanted to touch on events that have recently made the news: India’s move to ban China-based apps, including the very popular app TikTok, which has had a negative impact on the emerging digital content creator market. What is your view of this move?
VS: I discuss this a little in the privacy chapter, and I have done a lot of work on it since I wrote the book. This is called data nationalism. We once thought that free borders and open countries would have a positive impact on trade, especially digital trade. But as of now, most countries seem to be creating policies which protect their national interests, so national sovereignty has taken precedence in most cases. Data localisation rules have come in in most countries; even in the EU GDPR there are rules which seek to confine data within the limits of the EU as much as possible. So, most countries have started enacting these rules, which are very nationalistic in nature.
Now, see these national policies – data protection regulations which prevent certain apps from being used, which prevent data being transferred across borders, which want all data to be stored within the country even by global internet service providers. That is not good. If we make a rule that all digital data about the citizens of a country has to be stored within the country, that is not going to fly, because on one hand there are a lot of economies of scale in storing digital data in the cloud. The cloud may be anywhere; it improves reliability, it improves security. So it is in the interest of data fiduciaries to store data in disparate locations so that the availability of the data is as high as possible.
But nations might say that a particular set of data – for example, India’s draft data protection rules say there is a set of data which is very particular, very sensitive – needs to be stored within the country for national security purposes or for preventing fraud. Now that is possible, but we need to clearly define the subset of data that needs localisation for a purpose. It is very important for the state to define the purpose behind data localisation. What is the purpose? If, for example, surveillance is the purpose, then only the data which enables surveillance should be stored locally. Now, ideally that should not be the case. In fact, bilateral and global multilateral agreements have to take place so that even if the data is elsewhere, the state should be in a position to access it, given the purpose. I find that in most data localisation rules the purpose is very ambiguous, very generic, so we need careful deliberation on what data needs to be stored locally and for which purpose, so that it can be clearly articulated in the policies and also enforced.
HSA: And do you think this is something that the state should be doing, given the concerns we were talking about earlier, where there has been this tendency to use regulation to suppress freedoms?
VS: Yes, that’s definitely right. For example, the internet shutdowns which have been happening all over the world, especially in countries like India – that is not good because, as you know, I am teaching courses online, and there are students who have to access my course in real time, synchronised, when I’m giving a talk. But unfortunately, because of the lack of 3G or 4G, because of internet data not being available, because of internet shutdowns, they are not able to access my course. Who is benefiting? Nobody is benefiting. Actually, these students are suffering because of it. So, blanket shutdowns like this do not serve any purpose.
Similarly, I am saying that blanket regulation – store all data locally, ban all the apps – is not the way in which policies and regulations should be pronounced. There should be a clearly articulated purpose and there should be a time limit involved. For example, internet shutdowns in some of the states in India have been going on for a long time. I’m against that: if the purpose has been satisfied, why not open it up? So there has to be a time limitation on any of these aspects – stringent regulations, internet shutdowns, data localisation and so on. The states have to be conscious of who is benefiting from all these things. Okay, they want a shutdown for a certain purpose. After the purpose is achieved, they should open it up.
HSA: I also wanted to talk a little about some of the other topics you’ve covered in your book. Recently in Southasia, for example, internet service providers have been tying certain digital products to their data offers, and this is especially happening in this age of remote work and study due to the pandemic – for example, special packages for Microsoft Teams, or even for things like Netflix. In light of this, what do you think this means for net neutrality?
VS: So, as you know, there is pure net neutrality, which means that content and users should not be differentiated, should not be discriminated [against] – with respect to price, profit and users, there should not be any discrimination. Then there is pure non-net neutrality, where you can differentiate with respect to price, and you can also differentiate with respect to traffic, so that priority can be given to Netflix, for example, or to its users – Netflix users are given more speed and things like that. So there is absolute discrimination. Now, most countries fall neither into pure net neutrality nor pure non-net neutrality, but somewhere in between. For example, there can be a zero-rating package: if you subscribe to Netflix along with a telco’s 4G contract service, then you might get it for free, but if you access anything other than what is bundled in that particular offer, you might be charged. That is price discrimination across different content. The bundled content is free of bandwidth cost whereas the other content is priced at a positive price, and therefore users will tend to consume more of the zero-priced offering – the content bundled with the service – compared to the others, and therefore the others will die. That is the concern behind net neutrality. So, I am not totally against certain kinds of non-net-neutral behaviour. If content is bundled with service provisioning, as long as the user has the option of choosing some other content if he or she is interested, then it is okay. We can tolerate that, especially in developing countries. Take Africa, for example: Facebook’s Free Basics service is available in most African countries.
They have taken a stance saying that with Free Basics, users get all this Facebook content free with the service contract, and therefore it is going to benefit the masses at large and should be allowed. India has taken an absolutely net-neutral stand, a pure net neutrality stand; in India it is not allowed. The US has gone back and forth. So, these things do happen. Countries have to be very clear when they are making regulations. If, for example, we want small and innovative entrepreneurs to compete with giants such as Facebook and Google, then we should not allow bundling of an application with a service provider at zero price. If the bundled application is at a positive price, and other applications are also available at a slightly higher price, then it is okay – it’s not a complete violation of net neutrality. So, rules and regulations have to be formed taking into consideration the ecosystem we want to nurture. If we want to nurture innovation and local startups competing with global players, then we might want to move more towards net neutrality rather than non-net neutrality. That is the trade-off, the balancing act, that regulators have to perform.
HSA: What would your advice be to Southasian digital-rights activists as they get ready to face a string of stringent state regulations that could negatively impact freedom of expression?
VS: Right. All these aspects – for example, data localisation or the state’s rules curbing freedom of speech in the digital space – have been troubling data subjects at large. As you know, during the uprising in Egypt there was an intervention where people within Egypt were not able to communicate [with the] outside [world], and the same thing may be happening in China because of the firewall they have created. The state has to allow freedom of expression in the digital space; the ease with which you can communicate and reach out to larger masses should not be stifled. But at the same time, the government should carefully note whether speech or content creates social unrest, and if so, appropriate actions may be taken. For example, there is a lot of talk about intermediaries – Facebook and WhatsApp and so on. Are they liable for the information that is carried? Should they be controlled so that they proactively monitor content and take appropriate action if it is found to be negative for society at large?
So, I’m against proactive monitoring, but intermediaries should be liable. Suppose, for example, there is hate speech and the intermediary comes to know about it. Then they should take some action to filter out the hate speech and notify users that such a thing is happening. Those are actions intermediaries have to take. I don’t know about the other Southasian countries, but in India, for example, there is absolute immunity for intermediaries: all intermediaries are immune from liability for the content that goes through their networks. They have to be made liable, they have to be made responsible, because they affect society at large, and therefore there have to be some amendments to the way intermediaries function so that the negative aspects of free speech, for example hate speech or misinformation, are controlled to some extent.
HSA: You also said that the state should take appropriate action in instances where there is social unrest. Are you confident that the state – the Indian state in this instance – will be able to define what action is appropriate?
VS: See, definitely, Raisa, most Southasian nations, including large countries such as India, do not have enough capacity in government to make appropriate regulations and rules in the perfect way; it is quite possible that we might borrow rules and regulations from other countries. For example, many countries have adopted the EU GDPR because it has become a gold standard. But when we try to enforce it, it becomes a problem. One thing we have noted in most countries is that the state is not liable for its actions. Individual data fiduciaries are responsible, but what about the state if it makes a mistake? It should be made liable too. That is totally missing in most developing countries, because it is all state driven. The state can ask for data given certain purposes, but if that is misused, who is liable? The state has to be made liable and responsible. That’s one of the things we in India are requesting be included in the data protection bill, because there are a lot of instances in which the government or the state has gone wrong. They have intruded on privacy, they have intruded on certain aspects of free speech, and today the only recourse we have is the judiciary. Apart from the judiciary there is no way the government can be held accountable if it takes wrong actions. But it has to be embedded. I think to some extent the EU GDPR has succeeded in embedding the government as one of the data fiduciaries, responsible and liable for any misappropriation, and that has to be included in the regulations of most countries. Then we don’t have to go to the judiciary for each and every event; we can possibly solve it using the existing regulations.
HSA: That leads to my next question, actually, which was on data protection legislation – a topic of discussion in Sri Lanka, because we are also contemplating passing legislation on data protection, and India introduced its data protection bill in parliament in 2019. Broadly, can legislation on data protection be a panacea for privacy-related issues?
VS: That’s a good question, Raisa. Laws, legal rules and regulations exist so that the stakeholders of the ecosystem conduct themselves properly. We assume that if there are regulations, then they will be complied with. But that’s not the case. Even if you take the EU GDPR, there are so many violations. In fact, I teach a course on privacy, and it has been found that the famous cookie law in the EU has been violated left and right by data fiduciaries. Now, you can ask: given the laws and regulations, why are data fiduciaries violating them, and are they not being caught? And if they are caught, what will the liability be? The European Union has to some extent succeeded because it has put a very high penalty on privacy violations. If data fiduciaries do not notify privacy breaches, it is of the order of 10 million euros or two percent of annual global gross revenue; and if they do not exercise due diligence in security and protection of the privacy of individuals, it is much more – four percent of the annual gross revenue of the data fiduciary. So there is a high penalty if you violate it. On the other hand, most other countries do not have this. For example, even in the Personal Data Protection bill we have copied the EU GDPR to some extent, but whether someone will be caught if they violate the rules and regulations, and if so whether the penalty will be applied – will the data fiduciary which has violated pay that penalty, or get away with it? These are all big question marks. In most developing countries that has been the problem: the law is written, the rules are made, the regulation is there, but the enforcement of the rules and regulations is very weak. That is one thing I want to stress with respect to data protection.
Sri Lanka can come up with excellent data protection regulation, but is it going to be enforced? If, for example, somebody is found violating it, is the fiduciary going to be penalised, and by how much? Data fiduciaries, as you know, will look at costs and benefits: the cost of compliance versus the benefits of not complying. If the benefits of breaching the law outweigh the cost of compliance, then they will breach it; that is how, for example, the Facebook–Cambridge Analytica scandal came about – the benefits seem to have been larger, and therefore they breached that particular rule. So, what I’m saying is that data protection regulations provide the grounds, the frame of reference, and it is for the data fiduciaries and all the other stakeholders to comply; if they don’t, the state has to take stringent action and enforce it. That is the way this can move forward. So, to answer your question: is data protection regulation a panacea? No. It provides the ground rules, and the question is whether we can enforce whatever is present in the regulation – not only for private data fiduciaries but also for government data fiduciaries. That is a question that needs to be answered by most developing countries.
HSA: I think you already mentioned in passing that you felt India’s data protection legislation was imperfect, but I just wanted to ask if you would like to add anything to what you mentioned earlier?
VS: Yes, a couple of things. For example, it mentions that critical personal data has to be stored within India and India only. That is one of the clauses for data localisation. Now, it is left to the state to define what critical personal data is, and therefore it is an open question – you can come up with anything and everything under the sun later on. The second is the definition of sensitive personal data, of which at least one copy has to be kept within the jurisdiction of the country. Again, it is very loose – passwords and things like that have been included in sensitive personal data. So these definitions are a little loose, but nevertheless it is moving in the right direction, because it provides the ground rules for data fiduciaries to protect data when they collect and process personal information. It’s a good start, but as I told you before, government fiduciaries are not included in the set of entities that have to comply with all the rules and regulations of the data protection bill. The inclusion of government as one of the important data fiduciaries is very important to the success of any data protection rules and regulations, because otherwise, as you have correctly stated, given the power asymmetry between the government and citizens, it is possible for the government to do things which go beyond the rules and regulations stated in the policies.
HSA: You also speak of artificial intelligence as a double-edged sword in your book. Why do you say that, and what do you think can be done to mitigate the negative impacts, including, for example, on the labour market?
VS: As you have seen, machine intelligence has come into the market. A lot of routine tasks are being automated today because of computational power, the intelligence of algorithms and the availability of large amounts of data, such that even non-routine, cognitive tasks are getting automated. So it will definitely have an effect on the labour market. Already we are seeing some routine jobs being taken over by machines. Even in the IT industry, which is supposed to be a highly cognitive-task-oriented industry, the testing jobs have gone, because most software testing today is done by robots and algorithms. Professor Daron Acemoglu of MIT has done an excellent analysis of the impact on the labour market: whether it is routine, labour-oriented tasks or non-routine cognitive tasks that machines are going to take over. The challenge for the government is to do skilling appropriately and then define where machines should and should not go. For example, if labour-intensive work is very important for a particular country, then it should encourage AI and machine intelligence to be used more for cognitive, non-routine tasks; something like that should come into the picture. But in general, the impact of artificial intelligence or machine intelligence on the labour market is a question of reskilling. We need to really reskill our labour force, because we cannot stop technology from progressing – we cannot artificially stop it; if it comes to a stop on its own, that’s fine. Any company will recruit either a machine or labour to do a particular job based on marginal productivity relative to wages.
If, for example, labour’s marginal productivity is higher relative to its wage, then they will deploy labour. If it is cheaper and more productive to deploy machines, then they will deploy machines – it’s all economics. In order to carve out a space for ourselves, we need to be very clear about reskilling. If humans become better than machines – both in terms of the wages we pay and in terms of productivity – they will never be replaced; if labour becomes more productive at relatively lower wages, it will not be replaced. So I would really advise, as is given in the book, to encourage the positive aspects and mitigate the negative ones. The negative aspects can be mitigated using regulation – for example, trustworthiness: whenever machine intelligence or a robotic system is employed, it has to be trustworthy. New guidelines have come out with respect to regulating autonomous machines, and most countries should take cues from them.
HSA: Thank you so much for joining us today.
VS: Thank you very much.