
Case study: Integrating GenAI into assessment on the ‘Business Planning and Sustainable Entrepreneurship’ module

School for Business and Society
Claire Sinclair

Claire Sinclair outlines how the use of Generative AI was incorporated into a key assessment on a large core second year module called Business Planning and Sustainable Entrepreneurship. The first component of the module assessment involved students working in groups to develop business plans aligned with the United Nations Sustainable Development Goals. For the second component, they then produced an individual reflective essay critically appraising their own contribution to the teamwork. Using the School’s framework to define and communicate acceptable uses of GenAI, she informed students that full use of GenAI was permitted for the first component but that use of GenAI would be unacceptable for the second component.

Claire outlines the rationale for the use of GenAI which was initially considered as a response to student accusations in previous iterations of the module that fellow group members were cheating via GenAI in group work outputs. The rationale for allowing full use of GenAI for the business plans was to mimic real-world entrepreneurial resourcefulness, prepare students for employer expectations, level the playing field regarding access, and potentially ease workload which was perceived by students to be relatively high on the module.

She then describes the support sessions and ‘rules of engagement’ that the module team put in place to encourage students to use GenAI critically and appropriately. Students were introduced to the assessment in a dedicated workshop and told that they should use institutionally licensed tools (Gemini or Copilot), avoid sensitive data, and include all prompts in the appendices to demonstrate critical engagement.

Claire reports that the pilot successfully eliminated cheating accusations and did not impact overall grades. The module team found that critical engagement was somewhat limited, and they intend to develop the workshop sessions further in future to better support students’ critical engagement with GenAI.

She ends by offering advice to staff who may be considering incorporating use of GenAI within their own assessments, suggesting that a useful starting point is to spend time working with GenAI to produce and refine responses to their own assessments in order to learn about its capabilities and limitations, and to plan accordingly.

Watch their presentation:

Integrating GenAI into assessment on the ‘Business Planning and Sustainable Entrepreneurship’ module (Panopto viewer) (23 mins 47 secs, UoY log-in required)

Transcript

Hi there. My name is Claire Sinclair. I'm a lecturer in management at the School for Business and Society. I'm going to very briefly share my experience of using generative AI as part of our assessments here at SBS.

Let me first tell you just a little bit about the module that I run. So this is a module called Business Planning and Sustainable Entrepreneurship, which I co-lead with my colleague Catherine Botting. It's a mandatory module, so a core module for our second year in the undergraduate programmes across a range of different disciplines. So we've got business and management students, marketing students, and finance and accounting students. And it's a large module. So we have around about 400 students and growing in our cohort. But during the module, they work in teams, usually teams of about 6 or 7 students. And during the module they come up with their own idea for a new venture. And it has to be a new business that's fully aligned with the United Nations SDGs (Sustainable Development Goals), and so solving some kind of sustainability problem. And they produce two summatives. So the first one is a group work project. It's a 4000 word business plan. So that's done usually in Google Doc form, and they're working on it together to fully research their business. So they are speaking to potential customers. They're researching the market, they're looking at their competitors. They're often speaking to suppliers. So they're doing primary and secondary market research. They come up with four years of projections for their financials. So we try to make it as realistic as possible, everything fully underpinned with research and data. And then finally, individually, they work on a reflective essay. So that's 1500 words. And during that time they critically appraise their own contribution to the teamwork.

And it's an experiential module. So we try really hard to closely simulate the real life business planning process. So the processes and the steps that an entrepreneur would genuinely go through if they were starting to plan their business. And so in that sense, we are - we're not so much teaching students about entrepreneurship, but we're really teaching through entrepreneurship. So entrepreneurship is a pedagogical tool that we use to support the students to develop the skills that they will need in the future for whatever employability path they go on. So they're developing collaboration, teamwork skills, communication skills, they're learning to work across diverse teams, diverse cultures, developing leadership skills and so on and so forth.

And about two years ago, we had a problem. And that problem was that student teams were practically lining up at my door to tell me that fellow team members were cheating using GenAI tools, which had just been launched. They were accusing one another of cheating, saying that, you know, someone had used ChatGPT to generate their section of the business plan, and they were really expecting me to step in and somehow miraculously prove that, yes indeed, they'd been using AI, and chuck them out of the team, or do some kind of policing of the teamwork and the use of AI. And as I think we all know now, there are no reliable AI detectors. There are AI detectors out there, but certainly nothing that we could use reliably in terms of academic conduct. So we had a problem. And the way that we dealt with it was really to turn it on its head. And I think probably we came up with an answer that students didn't expect. By September 2024, we had permission from the Academic Conduct Committee to do a pilot whereby we would give student teams complete freedom to use GenAI tools for the business plan. So not for the reflective essay, because in the reflective essay we really want to hear the authentic voice of the individual. But for the business plan, we said that you have complete freedom to use GenAI, both for researching your business plan and for writing your business plan in your teams.

And there was a strong rationale, a number of different reasons sitting behind this. You know, as an experiential module we wanted to mimic, okay, well, how will entrepreneurs use GenAI now that these tools are available? And we know that entrepreneurs are resourceful. If you're bootstrapping a business, you're going to use everything available to you, whether that's people and skills or tools or other resources. So entrepreneurs, we know, are absolutely using generative AI to build business plans. Equally, there was some literature starting to emerge that employers are expecting our students coming out of university to have skills, not just knowing how to use the AI tools, but also understanding appropriate conditions and contexts in which they could use them, and thinking critically about how and when to use them. But we'd also noticed that some of our students who were paying for access to GenAI were forging ahead and really starting to use those tools, perhaps in quite a sophisticated way, whereas others, perhaps for affordability reasons or other reasons, were not accessing those tools at all. And we were concerned about that. We wanted to level the playing field from that perspective as far as possible. But we also had a particular piece of feedback on this module that it's quite a heavy workload. Students tell us that compared to other 20 credit modules, they find this module quite heavy going in terms of the amount of work, and we know teamwork does make students feel that they are working hard in terms of producing content. So I wanted to test whether this could be something that helps students feel that perhaps their workload is a bit more manageable. It's just another tool to help them.

So, in SBS, we have a framework that's been developed around helping to communicate, at each modular level, where we sit in terms of what is acceptable AI use on each module. So for some modules like my module, as you can see in the yellow box, we are enabling full GenAI, so students can use it as creatively and as collaboratively as they wish. And they can use it for brainstorming, research, writing; there are no particular limitations, although there are some rules which I'll come to in a second. And this framework is something that we're about to launch in SBS in September 2025 to try to communicate where each module sits, knowing that it's not going to be appropriate for every module to enable full GenAI - there'll be very, very good reasons for some modules not to have any GenAI allowed at all. So this framework is something that we're trialling and that we hope will give us a kind of standardised mechanism to be able to really clearly communicate to students, both in class but also in the assessment briefs, what appropriate usage looks like at an individual modular level.

So I mentioned that I had some rules on the use of GenAI on my module. We explained really clearly to students that they were only able to use the tools for which the university has an institutional license, which at that time was Copilot. We've now got Gemini as well, which I'm really pleased to be able to roll out to students this year. And whilst there was nothing we could do technically to prevent students, you know, if they were paying for ChatGPT - I couldn't go round each individual laptop to see what people were using - we did explain why, from a data integrity and data protection standpoint, it was important to log in with your institutional enterprise ID and use the enterprise or institutionally licensed tools. We also really emphasised not to include any sensitive or personal information in the tool, and again explained why, and showed some examples of what can go wrong if you do start to add information that you don't want to be shared into the tool. We asked students to include all their prompts in the appendices, and that was really important for us to help with the marking, to really be able to see to what extent they actually critically engaged with GenAI. And we really emphasised to the students that, yes, you can go ahead and use GenAI for as much or as little as you want, but to get the best marks, we need to see evidence that you are critically engaging with the tool - not that you're just taking the first answer that it gives and pasting it into the document.

So we had a couple of sessions with students to talk about GenAI: why we were using it, what the rules were, and doing some technical checks to make sure that everybody had access to Copilot. The first one was a 30 minute discussion within a seminar, and it was really interesting because I think many of the students were at first quite nervous to share how much or how little they were using GenAI. I think up until that point, generally on their programme, it was considered a bit of a taboo that you certainly didn't talk openly about with your academic staff. And here we were asking them to share: what tools are you using? What do you find it useful for? What do you find it not useful for? So I think that was a really healthy and important discussion, which created a safe space to say we want to develop these skills. And actually, you know, hand on heart, I'm not an expert in AI either. So I think we were all co-creating this content to a certain extent as well. So that sowed the initial seeds of what we were trying to do.

And then the second session, which I think was by far the most important, was a 90 minute workshop, a very active workshop with the whole cohort, really focussed on critical engagement and what we mean by critical engagement with the tool. The first exercise we did was to have the students take the assessment brief and put that into Copilot, to basically try to get it to write a business plan first up. We then had them mark that business plan using the rubric for our module. And that was a really useful exercise because, first of all, they thought, brilliant, it could write a business plan in about two minutes, and then they very quickly realised that, using our rubric, that business plan probably wasn't going to pass, or maybe would get a third. They all graded themselves - I think a third was the very highest mark that any of them gave. So I think that was a shocking moment, I suppose, within the teaching, at which we could start to say: okay, now, that doesn't mean it can't do it; it means that we have to have this conversation back and forth. We have to re-engineer our prompts. We have to really focus the AI. We have to direct it, because we're in control of the tool. It will only do what we ask it to do if we're very, very clear in terms of the focus that we want from it. So then they did some more work: they took a particular section of the business plan - they looked at the PESTLE analysis - and tried to critique what the AI had written, rewrite it, rewrite the prompt, and repeat that process. And we tried to explain that that circular process of going back, rewriting the prompt, editing, editing again, checking everything, looking at all the sources - that is what we mean by engaging critically with it.
And we followed lots of different steps, you know, looking at the secondary data it was coming up with, chasing the sources down, and working out how we could figure out whether a source was reliable or not. So it was a really important and productive set of conversations. Then down the line, as they were writing their business plan, they used - and I'd say experimented with - AI for all sorts of different aspects of the business plan. They used it for some initial brainstorming when they were completely stuck for ideas of what their business could or should be. They did use it for a lot of the drafting and editing work, and for some initial research as well, to get some initial sources from which they could then go ahead and carry out the desk-based secondary research. They certainly used it for structuring: in my teams, even though it's a 4000 word business plan, often what they'll do is go way over the word count. So when teams go off to work individually and then piece it all back together, sometimes they've got 10,000 words, and they really worry: what do I need to cut out? How can I better structure this? And I think it was a very useful tool for helping with word count reduction and structuring. And then they were also trying to use it for some of the analysis - so the PESTLE, like I mentioned, and the SWOT analysis, and some of the other academic frameworks that they were trying to apply to their business idea.

So in terms of the results: well, it certainly achieved the primary aim, which was that all those accusations of cheating disappeared, because we turned that whole question around. We were a little bit nervous initially about, you know, is this going to inflate our grades, or indeed is it going to make our grades a lot worse? So it was a little bit of a leap of faith from that perspective, and I think we did very heavy moderation for that reason - certainly heavier moderation than I've done in previous years. And we're really confident that the grades generally stayed about the same - certainly in line with last year, and with the last 4 or 5 years since we've been doing this module. So it didn't have a particular impact on the grades. That may be a good thing or a bad thing, I don't know. But it certainly didn't dilute or overly inflate the quality of what the students were doing. As I said, the students were a little bit nervous to use it at first. And in fact, some students - two students in particular - pushed back on it very heavily, to say: we don't think this is right. We think this is encouraging cheating. This isn't what we're at university for. And that also was a really useful and important conversation to have out in the open, because then we were able to share some of the stats about what employers are looking for, for example, and important examples to explain the rationale.
Anecdotally, students did say that they felt it had helped them with workload. Although I'm not sure - we haven't done any genuine testing to know whether it has really reduced their workload or not. I think for the students it's more of a perception of reduction rather than a genuine reduction in their workload, but we need to do a bit more proper testing on that, really. Interestingly, as well, I talked about the workshop that we did around skills training. The feedback from students on that was generally pretty average, and perhaps on the negative side, with quite a lot of comments saying: we don't feel like we need this training, we're using this all the time - how dare you assume that we don't know how to use this when this is just a standard tool? And yet, when we look at the feedback from the tutors - and we've got a team of six markers on the business plans - of the 60 4,000-word business plans that came in, the engagement with the GenAI tool was actually very limited, and we were actually quite disappointed with the evidence that was provided. A lot of students had used the AI, but when you looked at the prompts they were using, quite often they were taking the first or the second response from the bot rather than really engaging with it critically. And so we provided lots of feedback to the students on that. But I think that just goes to show that even though students don't recognise that they need that skill support, we as academics can see that they absolutely do. And in the same way that we know, as a university, we have to provide learning opportunities around things like critical writing and critical reading, it's exactly the same when it comes to using a tool critically.
So perhaps part of what we need to do is help students understand why critical thinking in relation to ChatGPT, or whatever tool they're using, is really important, but also really go in depth on those activities. And so one of my key lessons learned is to rewrite that workshop and showcase more clearly, I think, why it's really important, and the pitfalls that we can already see they are falling into.

So, try to think about how this might be relevant to other modules, because whilst, as an experiential module, it was really relevant for my module to give our students free rein with AI, of course we wouldn't want every module in every programme to be doing that. Every module is going to have a slightly different flavour of what is appropriate usage of AI. So my starting point, I think, is actually the final point on this slide, which is to write your assessment using AI. I didn't really understand AI - and I'm still learning to understand it - but I didn't really understand how it was going to be used on my module until I spent a couple of days trying to write my assessment. Not the assessment brief, but trying to actually respond, as a student would, writing the assessment using AI. And I was relentless, I wouldn't stop. I spent hours and hours and hours doing it, and it couldn't get it right, but I just kept on trying to get the very, very best possible results. It would have been really easy for me to give up after five minutes and say, with confidence, oh, AI can't write a very good business plan, we're just not going to use it. But actually, what I found from that exercise was that it gave me a really clear idea of where it could actually be useful, where I could harness AI in my assessment, but also where the pitfalls would be, where it's just actually not very good, and where the human interaction and that critical engagement is absolutely vital to success.
So I think that would be my first piece of advice to any module leader, whether you are pro AI or very, very against AI for whatever reason - and both those positions are valid. I think we're all wrestling with how to protect academic integrity whilst also trying to harness the benefits of AI, and wherever you are on that spectrum, just try writing your assessment using AI, and really, really keep trying. And then I think that will give you a much clearer idea of where to take it for your particular module.

I think, for me, what I found is that the GenAI tools are generally kind of style over substance. So I think they're really, really good for structuring, drafting and styling, like I said before. But I found them less good at the sort of deep thinking and the analysis - though that, I'm aware, is changing. There are tools coming out all the time, and, you know, they're advancing. And I think the analysis and the deep thinking will get better; I think it is already getting better. So I need to follow my own advice and try to write my assessment again this year, knowing that it's almost a year since I did it last. And then, you know, I see lots of people starting to say: why don't we consider GenAI as one of our team, or you can think of it as an assistant, or you can think of it as an intern. And I'm actually quite against personifying GenAI. To me, it's not a person, it is a tool. And we are in charge of the tool, and the tool is only as good as the level of criticality with which we engage with it. So it's one of the tools, just like the other tools that they might use - you know, databases, journals, other literature, great literature, political reports, and so on. It's one of those tools that they can and should use, I think. And then, as I sort of said before, consider the skills around GenAI as just an extension of the critical thinking skills that we already really emphasise throughout all of our teaching and learning practices. But that brings me round full circle: to expect our students to critically engage with it, we need to engage critically ourselves, and therefore we need to keep on working with it and keep on trying what the various tools can do.