====Content-control software====
{{Further|Content-control software}}

A [[content filtering|content-filtering]] web proxy server provides administrative control over the content that may be relayed in one or both directions through the proxy. It is commonly used in both commercial and non-commercial organizations (especially schools) to ensure that Internet usage conforms to an [[acceptable use policy]].

Content-filtering proxy servers often support [[Authentication|user authentication]] to control web access. They also usually produce [[server log|logs]], either to give detailed information about the URLs accessed by specific users or to monitor [[Bandwidth (computing)|bandwidth]] usage statistics. They may also communicate with [[daemon (computing)|daemon]]-based or [[Internet Content Adaptation Protocol|ICAP]]-based antivirus software to provide security against viruses and other [[malware]] by scanning incoming content in real time before it enters the network.

Many workplaces, schools, and colleges restrict the web sites and online services that are accessible from their buildings. Governments also censor undesirable content. This is done either with a specialized proxy called a content filter (both commercial and free products are available) or by using a cache-extension protocol such as ICAP, which allows plug-in extensions to an open caching architecture. Websites commonly used by students to circumvent filters and access blocked content often include a proxy, from which the user can then access the websites that the filter is trying to block.

Requests may be filtered by several methods, such as [[Blacklist (Computing)|URL]] or [[DNSBL|DNS blacklists]], URL regex filtering, [[MIME]] filtering, or content keyword filtering. Blacklists are often provided and maintained by web-filtering companies and are commonly grouped into categories (pornography, gambling, shopping, social networks, etc.).

If the requested URL is acceptable, the proxy fetches the content. At this point, a dynamic filter may be applied on the return path. For example, [[JPEG]] files could be blocked based on flesh-tone matches, or language filters could dynamically detect unwanted language. If the content is rejected, an HTTP fetch error may be returned to the requester.

Most web-filtering companies use an internet-wide crawling robot that assesses the likelihood that content is of a certain type. The resulting database is corrected by manual labor based on complaints or known flaws in the content-matching algorithms.<ref>{{Cite journal |last1=Suchacka |first1=Grażyna |last2=Iwański |first2=Jacek |date=2020-06-07 |title=Identifying legitimate Web users and bots with different traffic profiles — an Information Bottleneck approach |journal=Knowledge-Based Systems |language=en |volume=197 |pages=105875 |doi=10.1016/j.knosys.2020.105875 |s2cid=216514793 |issn=0950-7051 |doi-access=free}}</ref>

Some proxies scan outbound content, e.g., for data loss prevention, or scan content for malicious software.
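The sketch below illustrates, under stated assumptions, how the request-path checks (URL blacklists, URL regex filtering, MIME filtering) and return-path keyword filtering described above might be combined in a filtering proxy. The hostnames, patterns, and keywords are hypothetical placeholders, not rules drawn from any real filtering product, and a production proxy would typically load such rules from vendor-maintained category lists.

<syntaxhighlight lang="python">
# Minimal sketch of request- and return-path filtering in a content-filtering
# proxy. All rule data below is illustrative placeholder content.
import re

URL_BLACKLIST = {"blocked.example.com", "casino.example.net"}   # hypothetical blacklist entries
URL_PATTERNS = [re.compile(r"\.exe$"), re.compile(r"/ads/")]    # illustrative URL regex rules
BLOCKED_KEYWORDS = {"forbidden-term"}                           # illustrative keyword list


def filter_request(host: str, path: str) -> bool:
    """Return True if the request should be blocked before it is forwarded."""
    if host in URL_BLACKLIST:                          # URL/DNS blacklist check
        return True
    url = host + path
    return any(p.search(url) for p in URL_PATTERNS)    # URL regex filtering


def filter_response(content_type: str, body: bytes) -> bool:
    """Return True if fetched content should be rejected on the return path."""
    if content_type == "application/x-msdownload":     # simple MIME filtering
        return True
    text = body.decode("utf-8", errors="ignore").lower()
    return any(k in text for k in BLOCKED_KEYWORDS)     # content keyword filtering


if __name__ == "__main__":
    # A proxy would call filter_request() before fetching and filter_response()
    # before relaying the result; a rejection is typically surfaced to the
    # client as an HTTP error page.
    print(filter_request("blocked.example.com", "/index.html"))  # True  -> blocked
    print(filter_response("text/html", b"harmless page"))        # False -> allowed
</syntaxhighlight>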