GovTech Spotlight: Alyssa Marie Loo — Intern, Product Manager and PSC Scholar
You might have heard of this year’s President’s Scholarship recipient, Alyssa Marie Loo Li Ann, and how she hopes to one day create a Singlish-speaking chatbot that will help make government chatbots accessible to a wider audience.
Perhaps less widely known is that she has also been interning at GovTech as a product manager since 14 September 2020. With her school term starting in January (she’s heading to Brown University), we caught up with her to reflect on her time at GovTech and share her perspective on some misconceptions about working in tech.
Alyssa worked on VICA while interning at GovTech. PHOTO: GOVTECH
What are you working on in your internship?
Quite a lot of things! I’m serving as the internal product manager for our VICA team, so I manage the Agile user stories and resolve any debates that come up about product features and feature behaviour. I’m also researching how far our chatbots can support Singaporean English expressions, managing partner engagements for future features on our roadmap, and doing some research into AI Ethics guidelines for our chatbot.
What do you hope to achieve in your time at GovTech?
VICA 1.0, released this month (December 2020), is something I feel really happy about, having supported the team in internal product management for 3 months at this point.
I learned a lot about how B2B engagement works, how a full stack development team operates, and about Agile and Scrum methodology. I also learned data science skills and basic programming; I’m more confident executing some data science tests for research now.
While I’m leaving at the end of December, I hope that my research will become important guidance when the team embarks on Singaporean English support and possible CCAI development in 2021-2022. I also hope to put out a set of AI Ethics Guidelines, at least for our VICA team and our business users, so as to contribute to the foundation for responsible AI usage in our public sector.
What are some misconceptions of GovTech you had initially?
I thought that everyone had to be a hardcore programmer to work anywhere in tech, and I thought that to work in GovTech, you had to be some real programming prodigy who could do sorcery with computers.
It’s really not true! I learned that there are many roles in tech which don’t require you to specialise in the nuts and bolts of programming (though certainly, you need to have a general understanding).
Client engagement, product management and project management are all very important roles in development that I didn’t really know about until I joined GovTech.
The computer sorcerer myth has also been completely debunked. I realise that, a lot of the time, developers solve small problems and learn things as they go. Eventually, a product takes shape, and it seems to work like magic – but having been on the development team, I realised it’s just a lot of hard work, incremental problem-solving, and many intelligent Google searches.
What did school not prepare you for when you stepped into this role?
Definitely the uncertainty of the kinds of problems I’m thrown into. In school, you’re given clear project requirements and assessment rubrics, but in real life, you’re given an open, ambiguous question, and you have to figure out how to best solve it.
It was paralysing at first to figure out what approach I should take, but I’ve come to really love the environment of innovation and challenge. I’ve also learned that I need to look past perfectionism when putting out work.
The concept of a minimum viable product and iteration that is championed in a tech team has been really helpful in guiding my thinking: it’s better to try something and then improve on it rather than get blocked indefinitely on an insistence on making something perfect. My team’s ‘fail fast, learn fast’ culture has also created a safe space to experiment and explore.
What advice would you have for future interns in GovTech?
Definitely be ready to take ownership of the projects you are assigned: this means taking charge of self-set deadlines, asking for help when you need it, and figuring out a way to get things done on your own. GovTechies are marvellously helpful, but they don’t helicopter or micromanage; they will always be willing to help, but you need to know when to start asking for it.
What are your thoughts on the use of AI in Singapore’s Government?
It can be a very powerful tool to help improve the quality of access to government services. It’s a common issue that there are so many schemes with complicated policy wordings to pore through; it’s all very daunting to have to understand by yourself. On the government side, we also have limits on the number of public service staff we can devote to answering citizen queries about present policies.
A clear use case for AI in our immediate future is hence to help bridge that information gap: automatically suggesting policies relevant to someone in a particular stage of life (for instance, introducing Baby Bonus policies to someone who has just had a child), intelligently summarising policy documents so they’re easier to read for an initial understanding, and powering intelligent virtual assistants that can rapidly answer basic queries and resolve administrative requests.
These save time and mental energy for citizens and government staff alike.
What are your thoughts on AI Ethics/Regulation/Governance?
As AI becomes more developed and more deeply integrated with government services, the issue of accountability for AI models definitely arises.
For example, any degree of personalisation requires an AI to make assumptions about a person’s profile based on their data. This can certainly be helpful in pushing information that is targeted to a citizen’s needs, but it can also seem intrusive if the AI (and, by extension, the government) seems to know too much. It can also be harmful if these assumptions wrongly restrict access to services – and in these cases, who would be accountable for the error of the model?
To mitigate this risk, every step in the design of an AI application needs human supervision or intervention. In more innocuous cases – for example, to avoid the AI omitting important information in an automatic summary – this can be a UX question: nudge the reader to remember that the summary is only a rough cut, and make it easy for them to access the full text. In more serious cases, such as where AI influences access to services, a human-over-the-loop system is needed: a human agent must be the one who reviews the information and makes final decisions.
I think it’s great that the Singapore government is a public sector thought leader in producing AI Governance Frameworks and presenting them to the world – as we did at both the 2019 and 2020 World Economic Forums.
These were primarily targeted towards the private sector. Now, as many AI applications are beginning to take shape in our public sector, I hope our governance frameworks will be applied internally as well. It’s an immensely exciting time for the development of our citizen services, and it must be navigated responsibly.
We see you intend to study linguistics in uni; that might seem like an unconventional choice for someone joining a tech company. Can you elaborate?
I think linguistics is also relevant to tech, particularly for the chatbot team I am working with. In my research on how to use computational linguistics to support Singaporean English, I bring a new perspective to the team, looking at use cases for language support from a linguistics angle rather than a computational one. I wanted to join GovTech because I wanted to try something outside my comfort zone and experience what a tech company environment is like. I had friends who had interned at GovTech before, and they were all fans of the work culture, calling it a public agency ‘startup’.