Not all web tasks are feasible under strict cost and safety requirements, yet standard reinforcement learning implicitly assumes feasibility. This study introduces a feasibility-aware agentic reinforcement learning framework that explicitly reasons about whether a task can be completed within given cost budgets and failure-risk limits. A feasibility estimator is trained to predict the probability that any valid action sequence exists under the current constraints. The agent uses this signal to adapt its strategy, prioritize feasible subtasks, or terminate early when feasibility is low. Evaluation on 800–1,400 constrained web tasks demonstrates that feasibility-aware decision-making reduces wasted interactions, prevents high-risk attempts, and improves overall system reliability. This study reframes web automation as a constrained decision problem in which recognizing infeasibility is as important as optimizing success.
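The control loop described above — querying a feasibility estimator each step and terminating early when the predicted probability drops below a threshold — can be sketched as follows. This is an illustrative toy, not the paper's implementation: `feasibility` here is a hypothetical stand-in heuristic (remaining budget relative to estimated steps needed), whereas the actual estimator is a trained model, and the names `run_episode`, `est_steps_needed`, and `threshold` are assumptions for this example.

```python
def feasibility(state, budget_left):
    """Hypothetical stand-in for the trained estimator: probability that
    some valid action sequence completes the task within the remaining
    budget. Here, a simple heuristic: feasibility shrinks as the
    remaining budget falls relative to the estimated steps still needed."""
    return max(0.0, min(1.0, budget_left / (state["est_steps_needed"] + 1e-9)))

def run_episode(task, budget=10, threshold=0.3):
    """Feasibility-aware episode: act step by step, but abandon the task
    early once predicted feasibility drops below the threshold."""
    state = {"est_steps_needed": task["steps"]}
    spent = 0
    while spent < budget:
        p = feasibility(state, budget - spent)
        if p < threshold:
            return "abandoned", spent  # early termination: low feasibility
        # ... take one web action here (omitted), consuming one unit of cost
        spent += 1
        state["est_steps_needed"] -= 1
        if state["est_steps_needed"] <= 0:
            return "success", spent
    return "budget_exhausted", spent

# An easy task completes; an oversized one is abandoned before the
# budget is wasted, rather than exhausting all 10 interactions.
print(run_episode({"steps": 3}))   # → ('success', 3)
print(run_episode({"steps": 20}))  # → ('abandoned', 6)
```

The key design point the abstract argues for is visible in the second call: the agent stops after 6 of 10 budgeted interactions once the task is recognized as infeasible, rather than spending the full budget on a doomed attempt.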