In his post, Chris Sims summarized David Starr's experiences dealing with unfinished stories.
One way to track progress is to give 80% of the point value of the story to the team for the current sprint. At first blush, this approach seems to accurately reflect the state of things, and may help keep the team's recorded velocity from varying up and down, sprint to sprint. It also has a certain amount of 'feel good' value for the team. However, this approach carries significant risk: the story is not verifiably done, and the amount of time and effort needed to get it to 'done' isn't really known.
A second possibility is to split the story into smaller stories and take credit for the ones that can be considered done. To the extent that some of the smaller stories are truly 'done', this can reduce the risk associated with the 'partial credit' approach. It also allows the product owner to make some decisions about the relative importance of the unfinished stories.
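To make the contrast concrete, here is a small sketch of how the two approaches affect recorded velocity. The story points, 'done' fractions, and function names are my own illustration, not from the post:

```python
# Each story is (points, fraction_complete) for a hypothetical sprint.

def velocity_partial(stories):
    """'Partial credit': count a fraction (e.g. 80%) of an unfinished story's points."""
    return sum(points * done for points, done in stories)

def velocity_split(stories):
    """'Split and count done': only verifiably done stories score points.
    Splitting an 8-point story into a done 5 and an unfinished 3 would
    appear here as two entries, (5, 1.0) and (3, 0.0)."""
    return sum(points for points, done in stories if done >= 1.0)

sprint = [(5, 1.0), (3, 1.0), (8, 0.8)]   # one 8-point story is 80% "done"
print(velocity_partial(sprint))  # 14.4 -- smoother, but overstates finished work
print(velocity_split(sprint))    # 8    -- only verifiably done work counts
```

The partial-credit number looks steadier sprint to sprint, but it books value for work whose remaining cost is unknown; the split approach only records what is actually done.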
In our projects, we seldom run into this issue. Most of the time we finish our stories on time, but occasionally we hit trouble with a particular user story. Usually the problem is that the story has been developed, but the result is not what the user expected; it is common for users not to really know what they need. In a formal development team, however, we must account for our effort so that we know our TRUE velocity for a given cycle. In such cases we usually adopt David's second approach: we split the user story into smaller stories and give the unfinished ones higher priority in the next iteration.
But here is my question: if the user story is finished, but we find it isn't good enough for the user, what should we do?
Satisfying the user is one of the most important things to keep in mind in an agile project. But most of the time the user just says, "I need something, but I don't know exactly what it looks like; I need your help." You cannot pin down the user's final quality requirements, so you get feedback like, "Yes, it works, but I think there are things we could improve; I just don't know what they are..."
How do you deal with this in your experience?