User:Sennadacosta

This page intends to present a summary of my research activities on academic collaboration, especially phenomena related to co-authorship. Feedback, suggestions, input, insights, questions and criticism are very welcome.

My current research project at the Humboldt University of Berlin has the following scope:

 Bibliometrics / Scientometrics and the 'commoditization of knowledge' in the era of academic capitalism 

Author: Mariano Senna da Costa (MSc)

Affiliations: Institut für Bibliotheks- und Informationswissenschaft / Berlin School of Library and Information Science, Humboldt-Universität zu Berlin, Dorotheenstrasse 26, 10117 Berlin (Germany)

Type of work: Preliminary report from a PhD dissertation

Keywords: academic activities, academic capitalism, academic communication, co-authorship, collective writing, co-publication, e-science, information management, knowledge monopolies, networking processes, open access, open science, papermania, philosophy of science, publish or perish, research policy, scientific collaboration, scientific openness, scientific networks, virtual research environment, wikitopia

Focus of Interest: Academic publishing policy, Co-authorship of scientific texts and works, Research assessment, Scientific collaboration

Reviewers: Celia Querol

Introduction
"'What is true of the species that live together in a wood is also true of the groupings and sorts of people in a society, who are similarly in an uneasy balance of dependency and competition'. (Gregory Bateson, 'Steps to an Ecology of Mind', Chicago, 1972 - p.437)"

Bibliometrics is a set of quantitative techniques applied to assess scientific papers and publications according to their statistical output. These techniques are used to determine the influence of a writer (e.g. the h-index) or of a publication (impact factor) in relation to citation records, number of articles, publication frequency, etc., and they are currently the standard of assessment for most scientific institutions in the world (Palmquist, 1996; NIHL, 2010).
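To make the two most common metrics concrete: the h-index is the largest number h such that an author has h papers each cited at least h times, and the two-year impact factor divides a journal's recent citations by its recent citable items. The following is a minimal sketch of those definitions only; real indexing services (e.g. Web of Science) apply their own data-cleaning rules on top of them:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # highest-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still satisfies the threshold
        else:
            break
    return h

def impact_factor(citations_to_prev_two_years, items_in_prev_two_years):
    """Two-year journal impact factor: citations in year Y to items
    published in years Y-1 and Y-2, divided by the citable items
    published in those two years."""
    return citations_to_prev_two_years / items_in_prev_two_years

# Five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4:
print(h_index([10, 8, 5, 4, 3]))  # 4
# A journal with 200 recent citations to 100 citable items:
print(impact_factor(200, 100))    # 2.0
```

Both numbers are easy to compute, which is precisely what makes them attractive to administrators, and what the rest of this paper problematizes.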

There is also a trend in which these Research Performance Measurements (RPM) are becoming central to the judgement of academic work. Because they are globally measurable and comparable, it is understandable that they serve as a main method to justify the allocation of research funding and infrastructure investments. The growing importance of university rankings is both evidence of and a driver for this tendency (Dempsey, 2010).

On the other hand, these measurements do not consider many factors involved in writing and publishing, such as bureaucratic requirements, institutional context and individual motivations or interests, much less the impact of research on society. Furthermore, "network metrics" (Yao, 2009) are inadequate for describing and understanding the core structures, processes and problems of scientific authoring and publishing (Liang & Zhong, 2013).

According to a report prepared in 2007 on behalf of the Higher Education Funding Council for England, some of the problems with bibliometrics as the main form of academic output assessment are:
 * Citations per paper - critical concerns regarding the accuracy and appropriateness of citation.
 * Citations among disciplines - very different citation practices in each discipline, especially between engineering and the humanities.
 * Contextual information - metrics do not take into account information about the researchers, do not consider differences of context across disciplines and countries, and provide no proper means to evaluate interdisciplinary works. (Lipsett, 2007)

Most importantly, taking into consideration the general socio-economic context, in which profit and the accumulation of material means are pursued as a fundamental dogma, it is possible to foresee the main risk posed by the almost exclusive application of quantitative techniques to assess the work of scientists. 'Bibliometrics' or 'scientometrics' play a decisive role in certifying expert publication (Hilgartner, 2002), strengthening the process by which knowledge and information are turned into a commodity (Levins, 2010).

Quantitative assessment methods are not directly responsible for turning scientific studies into products, but they definitely help to lend these attempts a certain aura of objective and universal truth. As a consequence, the entire structure of the scientific establishment is shaped accordingly, with certain areas, topics and projects being favoured to the detriment of others with less impressive quantitative records (Münch, 2011).

In other words, bibliometrics does not even grasp what has been left aside by the current scientific establishment. It adds no improvement to the scientific endeavour itself; it merely reinforces the current pattern of centralization and institutional control over scientists' work (NIHL, 2010).

The present paper gathers arguments and evidence showing the early development of such a systemic bias in academic communication, and the risks it involves. It also tries to indicate how academics can contribute to a better system for assessing their work.

Academic Capitalism
It is undeniable that citation indexing represents a huge advancement brought about by communication technology in recent years (Belbachir & Harik, 2013). “Electronic data systems” allow the quantitative assessment of scientific texts on a global scale. At the same time, it is important to admit that the progress so far is still a tiny part of the potential for improvement, especially regarding the qualitative aspect of communication in research and learning environments (Nentwich, 2003). This assumption is directly proportional to our very limited comprehension of many aspects of scientists' behaviour when producing knowledge (Foster et al., 2007).

Furthermore, considering the historical importance of science, it is worth asking what kind of metrics would best fit the assessment of the knowledge production of educational institutions. Trust and credibility are decisive factors for scientific progress, just as promotion and tenure are fundamental to a scientist's career advancement. Are they being addressed accordingly? Instead of new “metrics”, we should start seeking new social norms in order to qualify and foster scientific collaboration (Bollier, 2011).

Institutional context is fundamental to the present discussion. It helps explain why the publication of scientific articles is still, in most areas, an activity dominated by big publishing conglomerates. Even though the number of publications and personal websites has increased enormously, this pluralization of information sources has been systematically boycotted by institutional and corporate agendas, which seem threatened by the direction of communication on the WWW (Gatti, 2012).

According to recent literature, the ongoing initiatives and debates about the transformation of the way scientists communicate have had almost no significant impact on the fundamental business models of the academic publishing world (PRWEB, 2013). Once more, current publishing practices and structures have to be considered in order to explain this phenomenon (Young, Ioannidis, & Al-Ubaydli, 2008).

Academic institutions have adopted almost exclusively quantitative assessment mechanisms as a means to justify administrative decisions regarding the infrastructure and funding of research. A scientist's career advancement is likewise based mainly on the number of articles published, among other quantitative outputs. Inevitably, the pursuit of certain numerical records is becoming the main reason for academics to write an article.

At the same time, commercial publishers are responsible for the most important academic publication titles nowadays. They manage the editorial process with a focus on maintaining loyal readers, who are the root of their credibility. However, at the decision-making level, publishers have to balance purely scientific goals with issues regarding the management of the publication (Bhat, 2009; Goldacre, 2012). The abundant evidence of a "citation game" taking place among editors is one effect of this culture. The number of economic terms currently used to analyse and describe practices of academic publishing illustrates the extent of the problem (Young, Ioannidis, & Al-Ubaydli, 2008).

Taking into account the “central importance of context”, we may grasp how this whole mechanism, including the quantification of research, promotes the phenomenon of “instrumental thought” in science as described by Gregory Bateson (1942/1972). At this point, an analogy can be made to the "quantophrenia" phenomenon: the tendency of the social sciences to express everything in numbers. It is a way to look serious, like the natural sciences, but it is also a way of avoiding going deeper into the analysis and understanding of many problems (Askwith Forum, 2013).

The idea is quite simple. Universities, scientists and their publications are increasingly being judged according to their “numbers”. As the common saying goes: numbers speak for themselves. By giving too much importance to numerical outputs, and becoming too dependent on merely quantitative assessments, academics naturally become used to putting aside their purely scientific goals. Furthermore, in such an environment a quantitative standard of evaluation may easily serve as an instrument of manipulation (European Network of Scientists for Social and Environmental Responsibility (ENSSER) & French Fondation Sciences Citoyennes (FSC), 2010).

Accusations of this kind of manipulation have fallen upon scientists with increasing frequency all over the world. There are many examples in the literature of how this whole process is influencing our institutions, education system and society. The phenomenon has been named “Private Science”, “Academic Capitalism” or “Ökonomisierung der Forschung” (the economization of research). These terms generally refer to the conflict in which scientists are keen to sacrifice their very scientific ideals in exchange for a funding grant or a chance of career progression (Kohlenberg & Musharbash, 2013).

Ben Goldacre (TEDMED, 2012) shows clearly how corporate interests, for instance, have created a bias whereby studies reporting good results in drug tests are favoured to the detriment of investigations reporting side effects of the same substances. In the end the conclusion is clear: even doctors do not know all the problems caused by the drugs they prescribe. This has clearly been a matter of editorial choice rather than of technical or scientific limitation.

Knowledge Monopolies
The discussion is indeed recurrent. The Canadian professor Harold Innis (The Bias of Communication, 1951) wrote a seminal work about the role of communication and media in shaping culture and the development of civilization. Innis pointed to our “historical lack of interest in educational philosophy” and the “institutional conservatism” traditionally found in Western society as the roots of many challenges facing the academic establishment nowadays.

He coined the term “knowledge monopolies” to explain the trend towards social polarization (a knowledge elite vs. a mass of the ignorant) and the centralization of power (who defines reality and truth). In academic environments this trend is manifested as:
 * Hierarchy struggles
 * Difficulties in reaching the sources of information
 * Lack of professionals carrying out communication tasks
 * Conflicts among different areas of knowledge
 * Differences among professional timings
(McGinn, 1991; Soules, 2004)

Some of Innis' statements concerning adult education, universities and scientific texts seem prophetic: "'Mechanical devices become concerned with useless knowledge of useful facts. Complaints of duplication and confusion are inevitable. But I do see all the injury to the higher literature inflicted by the torrential multiplication of printed stuff coinciding with the legal enforcement of mechanical reading - absurdly misnamed Education... the materialised state of mind and morals of modern anarchy, without beliefs, or ideals, or principles, or duties - this is to inaugurate a millennium of vapid commonplace and vulgar realism... The incessant education drill, the deluge of printed matter, asphyxiate the brain, dull beauty of thought, and chill genius into lethargic sleep.... a direct result of our mechanical schooling... is the gradual deterioration of literature into dry specialism and monotonous commonplace ... today we may say - the school has been the death of literature...” (Harold A. Innis, The Bias of Communication, Toronto, 1951 – p. 205)"

The idea of Science

"'...science is not a perfect instrument, but by no means ceases to be an excellent and invaluable utensil, which only causes damage when it is taken as an end in itself' (C.G. Jung, introduction to the book The Secret of the Golden Flower: A Chinese Book of Life, 1931 - p. 24)"

The American professor Neil Postman (The End of Education, 1996) invokes the very idea of science in order to approach the issue. According to him, one of the main sins of the current scientific establishment is giving the misleading impression that it holds something called "ultimate truth". Postman argues as follows: "'Such a belief is, in itself, an instance of the sin of pride, and no self-respecting scientist will admit to holding it...science is a moral imperative drawn from a larger narrative whose purpose is to give perspective, balance and humility to learning.' (p. 68)"

Postman's argument relates to the "acceptance of uncertainty" in any scientific endeavour. According to him, the quest for certainty usually leads to the instrumentalization of science in the form of dogmas or ultimate formulas ("the knowledge of gods", p. 69), as opposed to the idea that human knowledge is always limited. The discussion refers to the biblical legend of the fallen angels (Ge 6.1-4), or, by analogy to the myth of Sisyphus, the perception that the whole truth cannot be revealed to our limited human condition, although we keep pursuing it.

A few decades earlier, Thomas Kuhn (The Structure of Scientific Revolutions, 1962) made a similar point, explaining how science advances from one paradigm to another. While admitting that new knowledge may be an advance, he pointed out that it can never be absolute; only human arrogance could be responsible for such a belief. Evoking the perception of science as a "subjective and irrational enterprise", the former professor of philosophy at MIT admits his "relativistic" view, while calling upon "the problem of choice" and "the problems of translation" in order to explain communication barriers among academics (Kuhn, 1970 - p.175).

Depending on the perspective, the sin may be called "pride", "hubris" or "dogma". In any case, it is the kind of thing that scientists must get rid of.

The idea of Education

The book "The End of Education" (Postman, 1996) aims to promote a serious debate about the reasons for education, which will afterwards be embedded in the policies and affairs of organizations. Nevertheless, Postman admits that professors and school administrators seem the least interested in debating the reasons for education.

According to Postman, the main barrier to the improvement of academic institutions is that they are structured into separate, largely independent "major subjects". These subjects act and operate as bureaucratic and institutional entities, defining and shaping the way teachers, faculties, publishers, national organizations and so on work. The "entity" of one subject is licensed to teach that subject but not another; students, likewise, are entitled to attend one course but not another, even if the latter has better answers to the very questions a student may encounter during an academic semester. This inflexible structure places administrative and bureaucratic control above the learning process in the scale of importance of academic activities.

Again, the issue goes back to the current idea of education, in which textbooks are seen as "enemies of education, promoting dogmatism and trivial learning". In that regard, Postman testifies: "'...there is no sense of frailty or ambiguity of human judgement, no hint of the possibilities of error. Knowledge is presented as a commodity to be acquired, never as a human struggle to understand, to overcome falsity, to stumble toward the truth'. (p. 116)"

Quantity ≠ Quality

Libraries and writing centres, or community-based social enterprises, may be the seed of a solution for this dilemma. But these activities barely count towards a formal evaluation system. The British economist Tim Jackson connects the dots in his book Prosperity Without Growth (2009). Jackson describes how the whole of society is locked in an economic cage as follows: "'They (social enterprises) represent a kind of Cinderella economy that sits neglected at the margins of consumer society... These activities are usually labour intensive. So if they contribute anything at all to GDP, their labour productivity is of course dismal - in the language of the dismal science. '(p. 131)"

The former Economics Commissioner of the Sustainable Development Commission (SDC/UK) goes further, explaining why "de-materialized services" are not counted in performance evaluation, and what the effect is. It is related to our obsession with productivity: "'It's because, in most cases, human input is what constitutes the value in them. The pursuit of labour productivity in activities whose integrity depends on human interaction systematically undermines the quality of the output. Secondly, work itself is one of the ways in which humans participate meaningfully in society. Reducing our ability to do that - or reducing the quality of our experience in doing so - is a direct hit on flourishing.' (Jackson, 2009 - p. 132 - 133)"

Obstacles of Knowledge

"As Harold Laski (1930:102) observed, expertise can be a major obstacle to understanding: 'Expertise, it may be argued, sacrifices the insight of common sense to intensify experience. It breeds an inability to accept new views from the very depth of its preoccupation with its own conclusions. It too often fails to see round its subject. It sees its results out of perspective by making them the center of relevance to which all other results must be related... it has, also, a certain caste-spirit about it, so that experts neglect all evidence that does not come from those who belong to their own ranks. Above all, perhaps,... the expert fails to see that every judgement he makes not purely factual in nature brings with it a scheme of values that has no special validity about it. He tends to confuse the importance of his facts with the importance of what he proposes to do about them'... ...So far, I have emphasized the constraints on learning imposed by academic institutions, disciplinary conventions, and cautious faculty members. Reluctance to deal with core issues or to mount cross-disciplinary efforts are readily explained as the legacies of graduate school indoctrination, professionalized social solidarity, and a peer review system that promotes narrow intellectual cloning over broad critical thinking." (p. 51) I call it the "exclusive character of expertise"

"...teaching is being short changed in order to perpetuate an intellectual caste system that caters to publishable forms of microspecialization, most of it in isolation from broad social and environmental needs... ...The point is that our pedagogy is embedded in a vast system of institutions whose prevailing standards are based on the professional and economic profit from research rather than the social profit from teaching. Instructors who attempt to challenge these standards with innovative courses quickly discover that their professional status may suffer." (p. 55) (Maniates, 2003)


Papermania
Evidence of such deterioration can be found in different places, although, considering the ubiquity and instantaneous character of the WWW as a global communication platform, it is reasonable to expect similar effects everywhere.


A critical point of view of the publishing system

The 2013 Nobel laureate in medicine, Randy Wayne Schekman, accused some of the most important journals in the world (Nature, Science and Cell) of impeding the development of science in order to protect their editorial interests. Among the pernicious practices the journals were accused of are the "artificial reduction" of the number of articles accepted, hyped criteria for the selection of topics and texts, and a complete disengagement from the qualification of the scientific debate. Science must get rid of the tyranny of the "luxury journals", declared the scientist in an article in the Guardian newspaper after the announcement of the Nobel prize. He defended a research and publishing system that better serves the interests of society.

A critical point of view of the ranking system

CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior), the agency responsible for the accreditation and evaluation of graduate programs all over Brazil, attributes a decisive weight to the publication of articles in well-ranked journals such as Nature. In anthropology, for instance, 40% of the entire assessment weight comes from publishing. That is to say, in order to receive a good grade, a program must pressure its staff to publish research results in journals ranked A1, A2 or B1 on the Qualis index. In numbers, each scientist must publish at least two papers in such journals, not to mention other kinds of publications such as books, technical reports and conference posters. The main problem for Brazilian scientists with this system is that, in order to be ranked A1 or A2, publications must be classified in international indexes, which represents a strange form of dependence on the dynamics of the international publishing market.
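The threshold logic described above can be sketched in a few lines. The A1/A2/B1 strata and the two-paper minimum come from the text; the function itself is a hypothetical illustration of how such a cut-off rule works, not CAPES's actual scoring formula:

```python
QUALIS_TOP_STRATA = {"A1", "A2", "B1"}  # strata counted as "well ranked" in the text

def meets_publication_minimum(papers, minimum=2):
    """Hypothetical check: has a researcher published at least `minimum`
    papers in the top Qualis strata? (Illustrative only, not the real
    CAPES evaluation formula.)"""
    top_ranked = [p for p in papers if p["stratum"] in QUALIS_TOP_STRATA]
    return len(top_ranked) >= minimum

# Hypothetical CV: two of the three papers fall into the counted strata.
papers = [
    {"title": "Paper I",   "stratum": "A1"},
    {"title": "Paper II",  "stratum": "B4"},
    {"title": "Paper III", "stratum": "B1"},
]
print(meets_publication_minimum(papers))  # True
```

The point of the sketch is that everything outside the counted strata (the B4 paper above, books, reports, posters) simply drops out of the evaluation, regardless of its content.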

According to scientists in many disciplines, this is the wrong way to promote the internationalization of Brazilian research, privileging foreign publications largely financed by private interests tied to intellectual property defenders over the national publishing system, constituted mostly of journals published by public universities and associations.

Productivism

Some effects of this policy are:
 * Deterioration of an intellectual culture oriented towards scientific progress and innovation (i.e. collaborative and participative practices of investigation);
 * Obliteration or inferiorization of certain kinds of knowledge relative to others considered more efficient, productive or simply appropriate in a specific context or perspective (e.g. indigenous methods, concepts and knowledge) - a phenomenon also called "epistemicide";
 * Reduction of the diversity of ideas and of intellectual creativity, with the consequent legitimization of hegemonic strategies and policies intended to justify economic and political power (i.e. the distribution of resources and funding);
 * Maintenance of a colonial vision of science, in which the new and the authentic tend, almost without exception, to be imported (i.e. publishing indexes and evaluation criteria);
 * Publishing for the sake of publishing, or production for production's sake, disconnected from any commitment to solving real problems of society.

(Martins Moraes, 2014)

Increased academic productivity = increased scientific bias

Besides the "publish or perish" culture, academic environments are pervaded by a context in which individual achievement and success are practically the only way to assess research work.

The author of the article concludes: "...competitive academic environments increase not only scientists' productivity but also their bias. The same phenomenon might be observed in other countries where academic competition and pressures to publish are high".

Notes

"Te objectivity and integrity of contemporary science faces many threats. A cause of particular concern is the growing competition for research funding and academic positions, which combined with an increasing use of bibliometric parameters to evaluate careers (e.g. number of publications and the impact factor of the journals they appeared in), pressures scientists into continuously producing "publishable" results."

Theoretically, competition is a vector of efficiency and productivity in science, but: "...In many fields of research, papers are more likely to be published, to be cited by colleagues and to be accepted by high-profile journals if they report results that are "positive"..."

Regarding the philosophical background of the article's critique, the text states: "Words like "positive", "significant", "negative" or "null" are common scientific jargon, but are obviously misleading, because all results are equally relevant to science, as long as they have been produced by sound logic and methods."

The article attributes the mentioned "publication bias" to the roots of science's philosophy and sociology, but also to a fact that is commonly ignored: "Like all human beings, scientists are confirmation-biased (i.e. tend to select information that supports their hypotheses about the world), and they are far from indifferent to the outcome of their own research: positive results make them happy and negative ones disappointed. This bias is likely to be reinforced by a positive feedback from the scientific community". This statement seems to underestimate the influential power of the context, since the community can be the major factor influencing such behaviour.

"...careers are evaluated by counting the number of papers listed in their CVs and the impact factor of the journals they are published in..."

Other arguments can, to a certain extent, be linked to the idea of "knowledge monopolies": "Quantitative studies have repeatedly shown that financial interests can influence the outcome of biomedical research but they appear to have neglected the much more widespread conflict of interest created by scientists' need to publish. Yet, fears that the professionalization of research might compromise its objectivity and integrity had been expressed already in the 19th century. Since then, the competitiveness and precariousness of scientific careers have increased, and evidence that this might encourage misconduct has accumulated".

One of the weaknesses of the study is its exclusive focus on quantitative techniques, which leads the author almost to dismiss many important qualitative arguments and critiques of certain practices: "Surveys suggest that competitive research environments decrease the likelihood to follow scientific ideals... no direct quantitative study has verified the connection between pressures to publish and bias in the scientific literature, so the existence and gravity of the problem are still a matter of speculation and debate."

Results

"The probability of papers to support the tested hypothesis increased significantly with the per capita academic productivity of the state of the corresponding author... "

Productivity is a statistically significant predictor, controlling for expenditure...

"...in US states where researchers publish more papers per capita were significantly more likely to report positive results, independently of their discipline, methodology and research expenditure... results supports the hypothesis that competitive academic environments increase not only the productivity of researchers, but also their bias against "negative" results.... the effect is truly cross-disciplinary. "
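The shape of this cross-state analysis can be illustrated on synthetic data (not Fanelli's actual dataset; the numbers below are invented, with a positive link deliberately built in, solely to show what "productivity predicts the share of positive results" means as a correlation):

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
# Synthetic "states": papers per capita, and the share of papers
# reporting positive results, constructed with a positive association.
productivity = [random.uniform(0.5, 3.0) for _ in range(30)]
positive_share = [0.6 + 0.08 * p + random.gauss(0, 0.03) for p in productivity]

r = pearson_r(productivity, positive_share)
print(round(r, 2))  # strongly positive by construction
```

Fanelli's actual study uses logistic regression with controls for discipline, methodology and expenditure; this sketch only shows the underlying idea of a positive association between the two variables.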

The study would run into problems if anyone asked about the definition of certain terms, such as "intellectual environment": "An unavoidable confounding factor in this study is the quality and prestige of academic institutions, which is intrinsically linked to the productivity of their resident researchers. Indeed, official rankings of universities often include parameters measuring publication rates (although the validity of such rankings is controversial). Therefore, it could be argued that the more productive states are also the ones hosting the "best" universities, which provide better academic structures (laboratories, libraries, etc.) and more advanced and stimulating intellectual environments".

It is a self-reinforcing system: "Separating this quality-of-institution effect from that of bias induced by pressures to publish is difficult, because the two factors are strictly linked: the best universities are also the most competitive, and thus presumably the ones where pressures to produce are highest."

A cultural aspect with two different biases: 1) regarding missing negative results; 2) regarding prestigious researchers and institutions: "... journal editors tend to reject papers from low-income countries unless they have particularly "good" results. If there were a similar editorial bias favouring highly prestigious universities in the US - and some studies suggest that there is - then the more productive states (prestigious institutions) should be allowed to publish more negative results.... the missing negative results? ... went completely unpublished or were somehow turned into positive through selective reporting, post-hoc re-interpretation, and alteration of methods, analysis and data. The relative frequency of these behaviours remains to be established, but the simple non-publication of results is unlikely to be the only explanation".

The article suggests that pressure to compulsory publishing has clear effects on research work itself and the communication of its results: "... positive results should be treated with the same scrutiny and rigour applied to negative ones, but with all likelihood they are not. This latter form of neglect is probably one of the main sources of bias in science".

"...the detrimental effects of the publish-or-perish culture could be manifest in other countries around the world". (Fanelli, 2010)

The luxury-journals tyranny

Despite ideological misinterpretation, Randy Schekman's argument is clear and can be put as follows:
 * Place of publication is not a proxy for the quality of science;
 * Outstanding research does not appear only in outstanding publications;
 * Well-ranked journals primarily target the selling of subscriptions rather than the stimulation of research work;
 * Papers and journals cited more often (impact factor) are not necessarily the best ones;
 * The scores of papers, authors, journals and institutions say little about the quality of the research itself;
 * "Sexy" subjects or challenging claims are preferred by editors for editorial interests (attracting readers' attention), not for their contribution to any scientific subject;
 * This publishing system is influencing the science that scientists do.

Possible way out: "There is a better way, through the new breed of open-access journals that are free for anybody to read, and have no expensive subscriptions to promote. Born on the web, they can accept all papers that meet quality standards, with no artificial caps. Many are edited by working scientists, who can assess the worth of papers without regard for citations. As I know from my editorship of eLife, an open access journal funded by the Wellcome Trust, the Howard Hughes Medical Institute and the Max Planck Society, they are publishing world-class science every week.

Funders and universities, too, have a role to play. They must tell the committees that decide on grants and positions not to judge papers by where they are published. It is the quality of the science, not the journal's brand, that matters. Most importantly of all, we scientists need to take action. Like many successful researchers, I have published in the big brands, including the papers that won me the Nobel prize for medicine, which I will be honoured to collect tomorrow. But no longer. I have now committed my lab to avoiding luxury journals, and I encourage others to do likewise.

Just as Wall Street needs to break the hold of the bonus culture, which drives risk-taking that is rational for individuals but damaging to the financial system, so science must break the tyranny of the luxury journals. The result will be better research that better serves science and society." (Schekman, 2013)


An investigation I conducted during the first semester of 2012 showed the effects of enforcing international quantitative standards as the main method of evaluating academic work in a specific environment. The target group in this case was the staff of the School of Forest Engineering (CIFLOMA) at the Federal University of Paraná, southern Brazil.

The answers to a survey, combined with in-depth interviews of 46 researchers, were almost unanimous: the prevailing form of research assessment (the mandatory requirement of a minimum number of articles published in journals ranked highly by impact factor) is reducing the significance of the research output. Several interviewees refer to this phenomenon as "papermania" or "publishing to publish" (in allusion to the "publish or perish" trend (Wikipedia, 2010)).

While submitting their texts almost exclusively to traditional publications with higher impact factors, the scientists at CIFLOMA declared that they were aware that these publications make digital copies of the articles available in libraries, institutional portals and other sites on the Internet.

Online publication therefore happens only as a by-product of publication in traditional journals, which in turn provides yet another reason to concentrate publishing efforts on those journals. In other words, the publication of articles has a compulsory and exclusive character, and does not aim at the democratization or popularization of expert knowledge. Academic publishing is currently done almost exclusively to meet bureaucratic and administrative requirements. The obvious main reason is the competitive standard of assessment for promotion and tenure based on the number of published papers (CAPES, 2004).

Further evidence is the lack of conviction that knowledge must be shared (Stallbaum, 2005), reflected in the resistance to publishing work in progress publicly (Table 1).

Table 1. Do you publish your "work in progress" in its early stages?

Note that the interviewees are apparently not afraid of the openness embedded in digital communication. Only a few of them justify this "resistance" to a more open approach towards data and research with fear of plagiarism or copyright infringement (Table 2). Most attributed the attitude to the mandatory requirement of submitting original papers, an archaic demand of all relevant publications in the field. "It is simply the standard procedure not to expose work or material under construction or analysis", commented one of the interviewees.

Table 2. To what extent do you agree that other researchers are likely to copy or steal your ideas and work if you publish them online?

The testimony of Professor Nivaldo Rizzi, who was dean at the UFPR and managed research funding policy for the state of Paraná for 10 years, directly addresses the lack of institutional policy on communication services and on a more efficient collaborative process among academics:

"I am critical of any publication ranking in use nowadays. First of all, they do not represent our country's reality. Even if you take a single area, avoiding comparisons between areas, you will see that it is structured in a corporative way. In the QUALIS list, for instance, in our field (agricultural sciences), you will find publications with lower-quality content but a better position in the ranking. In my view, one of the reasons for this bias is that most ranking systems are based on a calculation system (index) that prioritizes database conglomerates, and this is so because it is done by a private company... it means it is unavoidable to have a corporative bias there... the journals in the CAPES database, for example, are indexed according to the ISI index system, which is developed by a big private conglomerate. It is easy to see that if they make a business of it, they will inevitably manipulate their product to serve their own interests. Our institutions nowadays work with the idea of 'knowledge dominance', not with the principle of knowledge sharing or the collectivization of knowledge... it is a competitive standard... among individuals and institutions. This creates an entire set of behaviours accordingly, including the principle of competition, which generates an attitude of secrecy towards data and information... the policy regarding research evaluation in Brazil nowadays reinforces this obsession with a competitive edge; it does not try to change this vicious circle... it is a philosophical matter, but it has a huge impact on how we work and how we relate to each other. We lack a policy guideline to counter the current academic culture." (Prof. Nivaldo Rizzi, CIFLOMA, UFPR)

New Media = New Culture
The Internet represents a structural change in how texts, information and knowledge are produced, managed and distributed. It is therefore reasonable to expect a structural transformation of academic communication, including a cultural change in the production and assessment of texts. The Open Science Movement (Waldrop, 2008) seems to suggest a first step towards such a change. Since communication is the heart of science, making it more open logically appears to be the best way to improve connections with colleagues, create new knowledge and promote scientific progress (Mittler, 2007).

However, in many areas scholars remain sceptical about so-called scientific openness. The main reason again points to the competitive framework prevailing in those fields (e.g. ranking systems). That is to say, scepticism towards open science is directly proportional to academic rivalry, reflected also in the importance of patents and intellectual property for access to funding, promotion and tenure. A report published by the Joint Information Systems Committee (JISC-UK) is clear about the issue: "The traditional journal publishing model... does not reward data sharing, social Web contributions or peer production approaches to data curation... it is clear that the lack of incentives for data sharing and participatory methodologies are a barrier to the wider adoption of the open science agenda." (Liz Lyon, Open Science at Web Scale, JISC-UK, 2009, p. 08)

Institutions play a decisive role in this process, and they normally act by limiting people's empathy along social, cultural or ethnic lines, laying the foundations for separation and competition among people and groups. The challenge today is therefore to de-naturalize the "social egoism" imposed by our institutions (Azzi, 2013).

Scholars have been challenged to transform their approach to knowledge and information. This should be a new way, beyond the political and economic convenience of established institutions. "We need to re-shape the entire academic culture", argued the German mathematician and professor Martin Groetschel during Open Access Week (October 2012) in Berlin, Germany. An experienced OA activist, he believes that the solution to the bottleneck in scientific production, today concentrated mostly in the hands of big publishers, depends on the scientists themselves. They have to find more transparent and democratic ways to deal with their own production: ways of true connection, real progress and genuine sustainability.