
The Unranking: Moving Away from Ego towards Learning for All

Modern “ranking” of students comes from the 18th–19th century push to standardize assessment in mass schooling. In 1785, Ezra Stiles, Yale’s president from 1778 to 1795, introduced the first documented formal grading system in the New World. A diary entry Stiles wrote in April of that year describes a system for sorting students into categories based on their performance on oral exams; by the late 1800s and early 1900s, points and letter grades had spread widely as schools tried to sort students efficiently at scale.

In India, public prestige around rank is tied to examinations that grew out of colonial recruitment and credentialing. The Indian Civil Service examination, first held in London in 1855, evolved into today’s UPSC examinations, where ranks confer life-changing status and a cultural logic of “rank = worth” has taken root. The same logic then spread to school boards, IIT-JEE, NEET, and other gateways.

There are signs of change: India’s National Education Policy (NEP) 2020 urges a shift from high-stakes summative exams to formative assessment, and CBSE has stopped publishing board “merit lists” to reduce unhealthy competition. But the habit of public ranking persists around selective entrance tests and media coverage.

What science says: three strong lines of evidence

Below are well-established experimental findings (several randomized; fully double-blind designs are rare in classrooms, for obvious reasons).

1) Grades and rankings lower students’ intrinsic drive to learn, while feedback focused on the task at hand keeps that motivation strong.
In classic randomized classroom experiments, Butler & Nisan (1986) and Butler (1988) assigned students to receive (a) comments only, (b) grades (scores) only, or (c) grades plus comments. Students given comments (task-involving feedback) showed higher interest and better performance; those given grades, with or without comments, showed reduced interest and, on many tasks, worse performance. This is precisely the motivational channel ranking relies on.

A meta-analysis by Deci, Koestner, and Ryan (1999) found that expected rewards and performance-contingent evaluations substantially undermined intrinsic motivation. Ranking is not a reward per se, but it functions as a controlling, ego-involving evaluation, which similarly undermines self-driven learning.

2) Competitive structures underperform cooperative ones.
Decades of comparative studies find that cooperative learning, in which students work toward shared goals with individual accountability, produces better results than competing or working alone. Ranking is a competitive structure, and the evidence favors cooperation for both learning and peer relationships.

3) Formative assessment and high-quality feedback beat ranking for learning.
Research shows that clarifying goals, checking understanding, and providing actionable feedback improves achievement. Good feedback tells students where they are headed, how they are doing, and what to do next. A public class rank answers none of these questions.

What about “leaderboard” effects?
Field experiments with relative-performance feedback sometimes find short-run boosts for top performers but neutral or negative effects for middle and lower performers (e.g., widening achievement gaps, increased anxiety), which is problematic in school cohorts meant to educate all students. The broader synthesis above holds: rank motivates comparison, not learning. Reporting on the mental-health crisis around prestigious exams and their ranking systems reflects this.

Why ranking is uniquely harmful in schools

It converts learning into status. Rank is by definition norm-referenced, i.e., your position depends on others doing worse. That channels attention from mastering content to managing impressions and avoiding failure, well-documented drivers of “ego-involving” motivation.

It amplifies stress and narrows curricula. Indian reporting around NEET/JEE highlights sleep problems, panic, and persistent anxiety. High-stakes rank pushes coaching cultures toward test-taking tricks and syllabus triage, crowding out exploration, projects, and collaboration.

It increases inequality of experience. Competitive ranking yields “winner-take-most” dynamics: the already-confident benefit from comparison; many others disengage. Cooperative, mastery-oriented classrooms show the opposite pattern.

It is weak feedback. A rank says nothing about what to fix. Formative feedback, by contrast, is specific, task-focused, and future-oriented, the combination linked to sizable learning gains.

Common counter-claims (and the evidence)

“Competition toughens students for real life.” Competitive experiences are inevitable, but schooling’s core task is learning. The comparative evidence consistently shows that cooperative/mastery structures produce higher average achievement and better peer relations than competitive structures in classrooms. You can teach resilience without organizing learning as a zero-sum race.

Even in the evolutionary sciences, while the idea of “survival of the fittest” often evokes images of brutal individual competition (a zero-sum race), many successful animal species rely on cooperation to secure resources and thrive, demonstrating that optimal outcomes are not always achieved through intense individual competition.

The inter-species relationship between honeyguide birds (Indicator indicator) and human hunter-gatherers such as the Borana serves as a potent analogy for cooperative learning over a purely competitive, zero-sum approach. In this partnership, the shared goal is access to a wild beehive. Either species acting alone usually fails: the bird can locate the hive but cannot break into it, and the human wastes significant energy searching for a hidden nest. This individual effort mirrors the low average achievement of a competitive, zero-sum structure. Through collaboration, however, the bird uses its specialized call to lead the human to the hive, and the human uses smoke and tools to open it safely. Both partners gain a vital resource: the human gets the honey, and the bird gets the exposed wax and larvae, a clear win-win. This cooperative strategy dramatically increases the success rate, demonstrating that cooperative, mastery-oriented structures (working together toward shared mastery of a difficult task) achieve higher, more reliable outcomes than individualistic competition.

“Rank is needed for selection.” Selective programs can use criterion-referenced cut-off scores and multi-measure profiles (portfolios, structured tasks, interviews) without public ordinal ranking. Even in India, CBSE has stopped public “merit lists” at the board level to curb unhealthy competition, proof that selection and public ranking are separable.

What to do instead (the expert-backed alternatives)

1) Replace norm-referenced ranking with reporting against clear, public standards. Instead of ranks, report whether students have met, exceeded, or are approaching those standards, together with work samples and concrete next steps. This keeps the focus on learning goals rather than competition among students, and it aligns with NEP 2020’s emphasis on competency-based learning and assessment that helps students learn.

2) Prioritize ongoing, in-class (formative) assessment. Implement established assessment-for-learning techniques such as clear success criteria, student self-assessment, exit tickets, hinge questions, and rapid feedback. Research by Black & Wiliam, and subsequent applications, consistently demonstrates improved student outcomes when teachers restructure classroom assessment in this way.

3) Give task-involving feedback; de-emphasize grades.
When feedback is specific to the work and suggests actionable revisions, motivation and performance rise; when it is ego-involving (scores and ranks), they fall. Train teachers to write comments that answer Hattie & Timperley’s three questions (Where am I going? How am I going? Where to next?), and delay or hide grades during revision cycles.

4) Structure cooperative, mastery-oriented tasks.
Use well-designed cooperative learning (clear group goals + individual accountability), which meta-analyses associate with higher achievement than competitive structures.

5) Keep selective gateways, but redesign how signals are sent.
Where limited places necessitate selection (e.g., in medicine or engineering), prioritize criterion-referenced cut-off scores, offer multiple low-stakes attempts, and report performance in bands rather than precise ordinal ranks. Instead of publishing “AIR 73,” present comprehensive profiles detailing skills achieved.
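
To make recommendation 5 concrete, here is a minimal Python sketch contrasting norm-referenced ranking with criterion-referenced banding. It is an illustration only; the student names, scores, band labels, and cut-offs are invented for the example, not drawn from any board’s scheme.

    # Illustration: norm-referenced ranks vs. criterion-referenced bands.
    # Names, scores, band labels, and cut-offs are hypothetical.
    scores = {"Asha": 91, "Bilal": 78, "Chitra": 78, "Dev": 64}

    # Norm-referenced: a student's position depends entirely on how others did.
    ranked = sorted(scores, key=scores.get, reverse=True)
    ranks = {name: position + 1 for position, name in enumerate(ranked)}
    # -> Asha 1, Bilal 2, Chitra 3, Dev 4; a rank says nothing about what anyone can do,
    #    and the tie between Bilal and Chitra is broken arbitrarily.

    # Criterion-referenced: each student is compared with public standards, not with peers.
    BANDS = [(85, "Exceeds standard"), (70, "Meets standard"), (0, "Approaching standard")]

    def band(score):
        for cut_off, label in BANDS:
            if score >= cut_off:
                return label

    report = {name: band(score) for name, score in scores.items()}
    # -> Asha exceeds; Bilal and Chitra meet; Dev is approaching.
    #    Every student can meet the standard at once; no result depends on others doing worse.

The point of the sketch is the reporting logic, not the numbers: banding keeps the information needed for decisions while removing the zero-sum ordering.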

How Indian systems can adapt 

Student numbers far exceed the available resources, which are constrained by governmental and non-governmental budgets alike. Even within that disparity, the measures below remain feasible.

School Boards (CBSE/State):

  • Maintain the no-merit-list policy in all public communications and discourage local ranking in any form: public announcements, alumni banners, or “toppers’ parades.”
  • Release examples of what students should know and be able to do in each subject and grade, and make schools use them in report cards. This aligns with new education policies that promote skill-based learning.
  • Provide teachers with more training on formative assessment (assessment for learning) by using resources from Indian universities and NGOs to make the practices relevant to local contexts.

Entrance Examinations (NTA/IIT-JEE/NEET/UPSC):

  • Instead of public all-India ranks, use score bands and competency profiles (e.g., “Biology Data Analysis: Band 3/5”). Share exact scores privately, and make only cut-offs and overall score distributions public; a hypothetical sketch follows this list.
  • Offer multiple sittings per year with best-score counting, and use item pools that emphasize higher-order skills; this reduces the one-shot pressure documented in Indian reporting.
  • Commission independent audits of the examinations’ fairness across genders, languages, and regions, and publish the audit methodology.
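
As a rough illustration of the first point above, the Python sketch below shows how an exam authority could publish banded competency profiles and aggregate statistics while keeping exact scores private. The competency names, band width, cohort numbers, and cut-off are all invented assumptions for the example.

    # Hypothetical sketch: banded competency profile instead of a public all-India rank.
    # Competency names, band count, cut-off, and cohort numbers are invented.
    from statistics import quantiles

    NUM_BANDS = 5  # Band 1 (lowest) to Band 5 (highest)

    def to_band(score, max_score=100):
        """Map an exact sub-score onto one of NUM_BANDS equal-width bands."""
        width = max_score / NUM_BANDS
        return min(NUM_BANDS, int(score // width) + 1)

    # One candidate's exact sub-scores, shared privately with that candidate only.
    private_scores = {"Biology Data Analysis": 58, "Organic Chemistry": 83, "Physics Mechanics": 71}

    # Public-facing profile: bands only, no ordinal rank, no league table.
    public_profile = {skill: f"Band {to_band(s)}/{NUM_BANDS}" for skill, s in private_scores.items()}
    # -> {"Biology Data Analysis": "Band 3/5", "Organic Chemistry": "Band 5/5",
    #     "Physics Mechanics": "Band 4/5"}

    # What goes public in aggregate: the qualifying cut-off and the score distribution,
    # which give transparency without naming or ordering individual candidates.
    cohort_totals = [142, 188, 201, 174, 163, 190, 155]
    public_stats = {"qualifying_cut_off": 160, "quartiles": quantiles(cohort_totals, n=4)}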

Universities/Colleges:

  • Grade courses based on clear standards (pass/merit/distinction) and use detailed reports that show projects, research, and skills.
  • Signal to applicants that portfolios, interviews, and real-world tasks matter, so that a single rank becomes less decisive.

Media & Coaching Sector:

  • When exam results come out, focus coverage on good study habits, mental-health support, and career paths rather than league tables. Indian media discusses stress but rarely links it to how evaluations are designed; that connection needs to be made explicit.

Bottom line

Ranking children is not a benign tradition; it is an educational intervention with well-studied side effects: lower intrinsic motivation, higher anxiety, and wider gaps. Experiments and meta-analyses over four decades converge on a better recipe: cooperative/mastery structures, formative assessment, and specific feedback.
