Papers
arxiv:2602.02488

RLAnything: Forge Environment, Policy, and Reward Model in Completely Dynamic RL System

Published on Feb 2
· Submitted by
Ling Yang
on Feb 3

Abstract

RLAnything enhances reinforcement learning for LLMs and agents through dynamic model optimization and closed-loop feedback mechanisms that improve policy and reward model training.

AI-generated summary

We propose RLAnything, a reinforcement learning framework that dynamically forges environment, policy, and reward models through closed-loop optimization, amplifying learning signals and strengthening the overall RL system for any LLM or agentic scenario. Specifically, the policy is trained with integrated feedback from step-wise and outcome signals, while the reward model is jointly optimized via consistency feedback, which in turn further improves policy training. Moreover, our theory-motivated automatic environment adaptation improves training for both the reward and policy models by leveraging critic feedback from each, enabling learning from experience. Empirically, each added component consistently improves the overall system, and RLAnything yields substantial gains across various representative LLM and agentic tasks, boosting Qwen3-VL-8B-Thinking by 9.1% on OSWorld and Qwen2.5-7B-Instruct by 18.7% and 11.9% on AlfWorld and LiveBench, respectively. We also show that optimized reward-model signals outperform outcomes that rely on human labels. Code: https://github.com/Gen-Verse/Open-AgentRL

Community

Paper author Paper submitter



In Eq. 1 it says R_τ^i is O_τ plus the average of m queries to the reward model. Assuming O_τ applies at every step i, then R_τ^i can be above 1 or below −1, right? Like a good step on a positive trajectory should be above 1, and a bad step on a negative trajectory should be below −1.

But in Section 2.2 it says R_τ^i ∈ [−1, 1]. How is that possible? Or is the implication that O_τ is zero until the final step?
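The range question above can be made concrete with a small sketch. This is only an illustration of the formula as quoted in the comment (outcome plus the mean of m reward-model scores), not the paper's actual implementation; the function name and the convention that O_τ ∈ {−1, 0, +1} are assumptions.

```python
# Hypothetical sketch of the step-wise reward as read from Eq. 1:
# R_tau^i = O_tau + (1/m) * sum of m reward-model scores, each in [-1, 1].
# Names and value conventions here are assumptions, not the paper's code.

def stepwise_reward(outcome: float, rm_scores: list[float]) -> float:
    """outcome: trajectory-level signal O_tau; rm_scores: m reward-model
    judgments, each assumed to lie in [-1, 1]."""
    return outcome + sum(rm_scores) / len(rm_scores)

# If O_tau is added at every step, the reward can leave [-1, 1]:
r_every_step = stepwise_reward(outcome=1.0, rm_scores=[0.8, 0.6])   # 1.7

# If O_tau is zero until the final step, intermediate rewards stay bounded:
r_intermediate = stepwise_reward(outcome=0.0, rm_scores=[0.8, 0.6])  # 0.7
```

Under the second convention (O_τ = 0 for i < final step), intermediate R_τ^i stays in [−1, 1], which would reconcile Eq. 1 with the range stated in Section 2.2.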


Get this paper in your agent:

hf papers read 2602.02488
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper 6

Browse 6 models citing this paper

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2602.02488 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2602.02488 in a Space README.md to link it from this page.

Collections including this paper 10