Who Should Read It?
This article is for web content creators, owners of public content platforms, web developers, and anyone who might suddenly publish content that becomes the subject of a DMCA claim. Twitter, GitHub, and Vimeo are a few examples of platforms that allow users to publish pictures, videos, and source code that might turn out to violate copyright law.
Disclaimer
Of course, when we are talking about public resources like Twitter, it is not hard for someone to write a web crawler smart enough to analyze a specific resource and copy or download all of its available content (or save it to the user's machine). In that case, your platform or website is simply one point in the content distribution chain and has no way of knowing how that content will be shared afterward. Since the crawler belongs to a separate resource with its own mission and its own reasons for working with this information (so it may or may not need to satisfy DMCA rules), there is not much you can do about it. Web crawlers could become a massive problem for DMCA applicability overall. On the other hand, they play a substantial role as an external cache that lets people find information that has been lost on the original resource. So web crawlers are not always bad, actually.
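To make the crawler point concrete, here is a minimal sketch (Python, standard library only) of the core of such a tool: parsing a page's HTML and collecting the URLs of embedded media, which a crawler would then download. The page markup and URLs here are hypothetical examples, not any real resource, and a real crawler would of course fetch pages over the network first.

```python
from html.parser import HTMLParser


class MediaCollector(HTMLParser):
    """Collects the src attribute of media-bearing tags on a page."""

    def __init__(self):
        super().__init__()
        self.media_urls = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # img/video/source are the typical carriers of embedded media.
        if tag in ("img", "video", "source") and "src" in attrs:
            self.media_urls.append(attrs["src"])


# Hypothetical page content a crawler might have fetched.
page = """
<html><body>
  <img src="https://example.com/photo.jpg">
  <video><source src="https://example.com/clip.mp4"></video>
</body></html>
"""

collector = MediaCollector()
collector.feed(page)
print(collector.media_urls)
# → ['https://example.com/photo.jpg', 'https://example.com/clip.mp4']
```

The point of the sketch is how little it takes: once content is publicly served, anything in the markup can be harvested, and the original platform has no further say in where it goes.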
from DZone.com Feed https://ift.tt/2NYN3cB