=== In computing ===
{{Cleanup rewrite|section|date=January 2022}}
{{tone|section|date=January 2022}}
Science relies on computing, computing draws on science, and computer science has grown into a discipline of its own, yet few practitioners can bridge all three. One consequence is that results obtained with a computer can be impossible to replicate if the code is poorly documented, contains errors, or omits parts of the computation entirely.<ref>Collberg, C., Proebsting, T. (2016) "Repeatability in Computer Systems Research" Communications of the ACM, Vol. 59, No. 3, pages 62–69 (via [https://cacm.acm.org/magazines/2016/3/198873-repeatability-in-computer-systems-research/abstract acm.org])</ref> Such problems often trace back to the persistence and clarity of variables, functions, and similar constructs, as well as to dependence on particular systems. Where a physical standard such as length has matter as its definitional basis, a computationally framed standard has no such anchor, so operational definitions become relevant to the interactions of humans with advanced computational systems. One area of discourse in this sense concerns computational thinking in, and its influence on, the sciences.<ref>"Computational Thinking in Science" American Scientist, Jan/Feb 2017 (via [https://www.americanscientist.org/issues/pub/2017/1/computational-thinking-in-science My American Scientist])</ref> To quote ''American Scientist'':
* The computer revolution has profoundly affected how we think about science, experimentation, and research.

One referenced project brought fluid-dynamics experts, including specialists in the numerical modeling behind computational fluid dynamics, into a team with computer scientists. The computer scientists lacked the domain knowledge to contribute as much as they had hoped, and their role, to their chagrin, often reduced to that of programmer. Some [[knowledge-based engineering]] projects found a similar trade-off between teaching programming to a domain expert and teaching a programmer the intricacies of a domain; which side of that coin a team member should spend time on depends on the domain.

The International Society for Technology in Education has published a brochure giving an "operational definition" of computational thinking, together with an attempt to define the related skills.<ref>"Operational Definition of Computational Thinking" (for K–12 Education) 2011 (via [https://www.iste.org/docs/ct-documents/computational-thinking-operational-definition-flyer.pdf website])</ref> One recognized skill is tolerance for ambiguity and the ability to handle open-ended problems. For instance, a [[knowledge-based engineering]] system can improve its operational character, and thereby its stability, through greater involvement of the [[subject-matter expert]], which in turn raises questions about human limits: computational results must often be taken at face value (hence the need for the [[duck test]]) for reasons that even an expert cannot overcome. The end proof may be the final results (a reasonable facsimile by [[simulation]] or [[artifact (observational)|artifact]], a working design, and so on), which are not guaranteed to be repeatable and may have been costly in time and money to attain.
In advanced modeling, with the requisite computational support such as knowledge-based engineering, mappings must be maintained between a real-world object, its abstracted counterparts as defined by the domain and its experts, and the computer models. Mismatches between domain models and their computational counterparts raise issues relevant to this topic. Techniques that allow the flexible modeling required for many hard problems must resolve questions of identity and type, which lead to methods such as duck typing. Domains with a [[Numerical analysis|numerical]] focus use limit theory of various sorts to reduce the need for the duck test, with varying degrees of success; even so, issues remain, since representational frameworks bear heavily on what can be known. In arguing for an object-based methodology, Peter Wegner<ref>Wegner, P. () "Beyond Computable Functions" ''Specification of Parallel Algorithms'' Page 37 American Mathematical Society (via [https://books.google.com/books?id=vg5RyQ1bcPYC&pg=PA37 Google])</ref> suggested that "positivist scientific philosophies, such as operationalism in [[physics]] and behaviorism in psychology" were powerfully applied in the early part of the 20th century, but that computation has changed the landscape. He distinguishes four levels of "irreversible physical and computational abstraction" (Platonic abstraction, computational approximation, functional abstraction, and value computation), beyond which we must rely on interactive methods that have behavior as their focus (see duck test).
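The following minimal sketch illustrates duck typing in this behavioral sense; the class and function names are hypothetical examples, not drawn from any cited project. A caller accepts any model object that exhibits the required behavior, regardless of its declared type, mirroring the duck test's focus on behavior rather than identity.
<syntaxhighlight lang="python">
class WindTunnelModel:
    """Hypothetical stand-in for a physical-experiment model."""
    def drag_coefficient(self, velocity: float) -> float:
        return 0.47 / (1.0 + velocity / 100.0)


class CfdSimulation:
    """Hypothetical stand-in for a numerical (CFD) model."""
    def drag_coefficient(self, velocity: float) -> float:
        return 0.45 / (1.0 + velocity / 100.0)


def evaluate(model, velocity: float) -> float:
    # Duck typing: any object providing drag_coefficient() is accepted;
    # what matters is the behavior it exhibits, not its declared type.
    return model.drag_coefficient(velocity)


for m in (WindTunnelModel(), CfdSimulation()):
    print(type(m).__name__, evaluate(m, velocity=30.0))
</syntaxhighlight>
Whether the two models are interchangeable is decided by comparing their observable outputs, which is exactly the point at which the duck test, rather than a formal type declaration, governs acceptance.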