Author: Austin Hunter
Edited: April 9, 2025
Published: March 10, 2023
NOTE
This is an archived post from my old blog detailing how I built it. My new blog, the one you’re reading this on, is built with Astro. I’ll likely write another post on building the new blog sometime in the near future.
Welcome to my blog! I have been thinking about making this blog for a long time, but only recently made the decision to go through with developing it. I built this blog from the ground up: it's a full-stack web application with Go and MySQL on the backend and React on the frontend. While I have some experience building static websites as a freelancer and writing software as a hobbyist, this project is the first web application I built with the goal of using mostly standard libraries rather than relying on too many external dependencies and abstractions. I did deviate from this goal occasionally, mostly on the frontend, where I used a bit more boilerplate and built some features that rely fairly heavily on external dependencies. The goal of this project was not to create a perfect web application, but to learn and hone my skills as a developer while also building myself a platform I can use to share my thoughts, ideas, and future projects.
In this post, I’ll be reflecting on my experience developing this web app. I’ve decided to break this reflection into sections to make it more readable.
Ultimately, my goal for this post is that it might be useful for those who are new to any of the technologies discussed here. I plan on writing more detailed tutorials in the future, but these will most likely be in the form of bite-sized projects that thoroughly cover a topic. Bigger tutorial projects or topics will be split into a series of these bite-sized tutorials. This post is not a tutorial. This post, at its core, is a (slightly one-sided) conversation about concepts. Now that I’ve bored you with some stuff about my goals and motivations, both for this introductory post and my blog, let’s get into the fun stuff.
./personal-blog
├── cmd
│   └── cmd.go
├── data
│   ├── models.go
│   ├── post-service-mysql.go
│   └── user-service-mysql.go
├── handlers
│   ├── auth.go
│   ├── middleware.go
│   ├── post.go
│   ├── static.go
│   └── user.go
└── main.go
The project is split into a few packages: cmd, data, and handlers, with main.go at the project root.
We’ll look at the data package first, as this is where all of the data structures and interfaces that enable the transfer of data are defined. This package serves as a source of truth for other packages when working with application data types, such as Posts or Users.
A blog doesn't have too many data structures to think about; I came to the conclusion that there were really only three that needed consideration: the Post model, the User model, and the Comment model. Each of these models is relatively simple. We'll take a quick look at each of them and then talk about what can be done with them.
type Post struct {
	ID         int          `db:"ID" json:"id"`
	AuthorID   int          `db:"Author_ID" json:"authorID"`
	Title      string       `db:"Title" json:"title"`
	ImageUrl   string       `db:"Image_URL" json:"imageURL,omitempty"`
	Content    string       `db:"Content" json:"content"`
	Archived   bool         `db:"Archived" json:"archived,omitempty"`
	UploadDate sql.NullTime `db:"Upload_Date" json:"uploadDate,omitempty"`
	Slug       string       `db:"Slug" json:"slug,omitempty"`
}
As you can see, the Post model is fairly comprehensive. I might still add some additional data to posts, such as a "tags" field, but this model gets the job done for now. Go allows the use of tags alongside struct fields; in this case, the tags also document how to interact with these data structures in a given context. The "db" tag corresponds to a column in a given table, while the "json" tag specifies the key that the field's data will be paired with when encoded to JSON. The User and Comment models are constructed the same way.
type User struct {
	ID             int    `db:"ID" json:"id"`
	FirstName      string `db:"First_Name" json:"firstName,omitempty"`
	LastName       string `db:"Last_Name" json:"lastName,omitempty"`
	Email          string `db:"Email" json:"email,omitempty"`
	ProfilePicture string `db:"Profile_Picture" json:"profilePicture,omitempty"`
	Admin          bool   `db:"Admin" json:"admin,omitempty"`
	Password       string `db:"Password" json:"password,omitempty"`
}
type Comment struct {
	ID       int    `db:"ID" json:"id"`
	PostID   int    `db:"Post_ID" json:"postID"`
	AuthorID int    `db:"Author_ID" json:"authorID"`
	Content  string `db:"Content" json:"content"`
}
With these models, there is one central source of truth about what exactly Posts, Users, and Comments are. Each of these data structures has a corresponding interface: PostService, UserService, and CommentService.
type UserService interface {
	CreateUser(*User) error
	GetUsers() ([]User, error)
	GetUserByID(int) (User, error)
	GetUserByEmail(string) (User, error)
	UpdateUser(*User) error
	DeleteUser(int) error
	GetRecordCount() (int64, error)
}
type PostService interface {
	GetPosts(int, int, bool) ([]Post, error)
	GetPostById(int) (Post, error)
	GetPostBySlug(string) (Post, error)
	CreatePost(*Post) (int64, error)
	UpdatePost(*Post) (int64, error)
	DeletePost(int) error
	GetRecordCount(bool) (int64, error)
}
You'll notice I haven't defined the CommentService yet; you can read more about that in the [what's next?](#whats-next) section. These interfaces define all of the functionality necessary to interact with persisted data. I also defined a data structure called DBService, which stores an implementation of each of the service interfaces shown above.
type DBService struct {
	PostStore PostService
	UserStore UserService
}
This DBService acts like a service in the sense that it provides a strictly defined set of functions; however, it is not an interface. Instead, it should be looked at as a composition of interfaces: all the necessary services are available within an instance of DBService. This allows each layer of the software to be given a distinct set of responsibilities without concern for those of the other layers, which makes the backend code more flexible. Suppose I decide that I want to use a NoSQL database instead of MySQL. If the responsibilities were not separated by the use of interfaces, such a change would require overhauling most of the backend; with these interfaces, all that is required is creating new implementations using the correct database drivers.
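To make that swap concrete, here is a hypothetical sketch: an in-memory store that satisfies a trimmed-down version of PostService. The types are reduced versions of the blog's models, and memoryPostStore is invented for illustration; any code written against the interface would accept it (or a MySQL-backed implementation) without changes.

```go
package main

import (
	"errors"
	"fmt"
)

// Trimmed-down versions of the blog's types, just to show the idea.
type Post struct {
	ID    int
	Title string
}

type PostService interface {
	GetPostById(int) (Post, error)
	CreatePost(*Post) (int64, error)
}

// memoryPostStore is a hypothetical alternative backend. Because it
// satisfies PostService, callers built against the interface work
// with it unchanged.
type memoryPostStore struct {
	posts  map[int]Post
	nextID int
}

func (m *memoryPostStore) CreatePost(p *Post) (int64, error) {
	m.nextID++
	p.ID = m.nextID
	m.posts[p.ID] = *p
	return int64(p.ID), nil
}

func (m *memoryPostStore) GetPostById(id int) (Post, error) {
	p, ok := m.posts[id]
	if !ok {
		return Post{}, errors.New("post not found")
	}
	return p, nil
}

func main() {
	// The variable is typed as the interface, not the concrete store.
	var store PostService = &memoryPostStore{posts: map[int]Post{}}
	id, _ := store.CreatePost(&Post{Title: "Swappable backends"})
	p, _ := store.GetPostById(int(id))
	fmt.Println(p.Title)
}
```

Swapping databases then means writing one new type with these methods, while everything that consumes PostService stays untouched.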
Next, let's look at the handlers package. Each API endpoint has an associated handler function. These handlers use the functionality provided by the data access layer to retrieve data from the database, pass it to data structures, and send the resulting data to the frontend encoded as JSON. I'm not going to go into too much detail here because I plan on writing a post about building web servers with Go fairly soon. If you'd like to see the functions for each endpoint, you can look at the GitHub repository for the project. Keeping the more complicated parts of the logic in a separate handlers package allowed me to keep the initialization of the server clean and easy to read. The current main file, where the server is initialized, looks like this:
func main() {
	// Database connection setup
	mysqlU := os.Getenv("MYSQLUSER")
	mysqlPass := os.Getenv("MYSQLPASSWORD")
	mysqlHost := os.Getenv("MYSQLHOST")
	mysqlPort := os.Getenv("MYSQLPORT")
	mysqlDB := os.Getenv("MYSQLDATABASE")
	// Log the connection target without the password to avoid leaking credentials
	fmt.Printf("Connecting to DB at: %s@tcp(%s:%s)/%s?parseTime=true\n", mysqlU, mysqlHost, mysqlPort, mysqlDB)
	db, err := sql.Open("mysql", fmt.Sprintf("%s:%s@tcp(%s:%s)/%s?parseTime=true", mysqlU, mysqlPass, mysqlHost, mysqlPort, mysqlDB))
	if err != nil {
		panic(err)
	}
	db.SetConnMaxLifetime(time.Minute * 3)
	db.SetMaxOpenConns(10)
	db.SetMaxIdleConns(10)
	err = db.Ping()
	if err != nil {
		panic(err)
	}
	defer db.Close()
	// Data access services initialization
	postStore := data.MysqlPostStore{DB: db}
	userStore := data.MysqlUserStore{DB: db}
	dbDisp := data.DBService{
		PostStore: &postStore,
		UserStore: &userStore,
	}
	// Create a new super user using environment variables
	createSuperUserEnv(&dbDisp)
	// Server setup and API endpoint definition
	mux := http.NewServeMux()
	mux.Handle("/", handlers.StaticHandler(http.FileServer(http.Dir("build/")), "./build/"))
	mux.Handle("/api/posts", handlers.PopulatePosts(&dbDisp))
	mux.Handle("/api/posts/", handlers.GetPost(&dbDisp))
	mux.Handle("/api/posts/create", handlers.AddPost(&dbDisp))
	mux.Handle("/api/users/signup", handlers.SignUpHandler(&dbDisp))
	mux.Handle("/api/users/signin", handlers.SignInHandler(&dbDisp))
	mux.Handle("/api/users/authtest", handlers.AuthTestHandler(&dbDisp))
	mux.Handle("/api/admin/posts", handlers.GetAllPosts(&dbDisp))
	mux.Handle("/api/posts/update", handlers.UpdatePost(&dbDisp))
	var port string
	if os.Getenv("PORT") != "" {
		port = fmt.Sprintf("0.0.0.0:%s", os.Getenv("PORT"))
	} else {
		port = ":8080"
	}
	// Start server
	t, err := net.Listen("tcp", port)
	if err != nil {
		panic(err)
	}
	fmt.Printf("Listening on port %s\n", port)
	if err := http.Serve(t, mux); err != nil {
		fmt.Printf("err: %v\n", err)
	}
}
The frontend is built with React; I'll be writing a more detailed post on building the client side of this application in the future. The frontend uses client-side routing, which has to be accounted for when handling requests coming from the root path. Thankfully, accounting for this is fairly straightforward.
func StaticHandler(fs http.Handler, dir string) http.Handler {
	fn := func(w http.ResponseWriter, req *http.Request) {
		// Check if the path is at the root
		if req.URL.Path != "/" {
			// If the path is not at the root, check if it corresponds to an existing resource
			fPath := dir + strings.TrimPrefix(path.Clean(req.URL.Path), "/")
			_, err := os.Stat(fPath)
			if err != nil {
				if !os.IsNotExist(err) {
					fmt.Printf("err: %v\n", err)
					return
				}
				// If the path does not correspond to an existing resource,
				// set the path to the root path before handling
				req.URL.Path = "/"
			}
		}
		fs.ServeHTTP(w, req)
	}
	return http.HandlerFunc(fn)
}
This handler prevents 404 errors when a user refreshes the frontend on, or follows a link to, a route other than "/": any path that doesn't match a real file on disk falls back to the root, where the React router takes over.
JSON Web Tokens (JWT) are used for authentication. I wrote a couple of helper functions to simplify the process of getting and parsing tokens using the golang-jwt package.
// getNewToken generates a new jwt.Token and returns a pointer to it.
func getNewToken(u *data.User) *jwt.Token {
	claims := &claims{
		u.Admin,
		jwt.RegisteredClaims{
			ExpiresAt: jwt.NewNumericDate(time.Now().Add(2 * time.Hour)),
			IssuedAt:  jwt.NewNumericDate(time.Now()),
			Issuer:    os.Getenv("HOST_NAME"),
		},
	}
	return jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
}
// parseToken parses a token string and returns a pointer to the decoded token.
// Uses the custom claims defined in the claims struct.
// The token string must use the HMAC signing method.
func parseToken(tStr string) *jwt.Token {
	pt, err := jwt.ParseWithClaims(tStr, &claims{}, func(tk *jwt.Token) (interface{}, error) {
		if _, ok := tk.Method.(*jwt.SigningMethodHMAC); !ok {
			return nil, fmt.Errorf("unexpected signing method: %v", tk.Header["alg"])
		}
		return []byte(os.Getenv("SECRET_KEY")), nil
	})
	if err != nil {
		fmt.Printf("err: %v\n", err)
	}
	return pt
}
These functions make up most of the logic behind authentication. Currently, authentication is stateless, meaning each attempt to access protected content requires the token to be present in the HTTP request headers.
The sign-in handler creates a new JWT and sends it to the client if the sign-in is successful. The logic for signing in is straightforward: a request containing an email and password is received from the client, the password is hashed, and the hash is compared to the stored hashed password associated with that email.
// Read the request body
reqBody, err := io.ReadAll(req.Body)
if err != nil {
	fmt.Printf("err: %v\n", err)
}
// Unmarshal the request body into a new User
var u data.User
err = json.Unmarshal(reqBody, &u)
if err != nil {
	fmt.Printf("err: %v\n", err)
}
// Get the user associated with the email sent in the request
uDB, err := db.UserStore.GetUserByEmail(u.Email)
if err != nil {
	fmt.Printf("err: %v\n", err)
}
// Check for matching passwords.
// comparePasswords hashes the plaintext password passed as the first parameter.
m := comparePasswords(&u, uDB.Password)
From here, if the hashed passwords match, we send a new token to the client:
t := getNewToken(&uDB)
if _, ok := t.Method.(*jwt.SigningMethodHMAC); !ok {
	res = authResponse{
		"ERR::BAD TOKEN",
		"",
		"Something went wrong.",
	}
	err = j.Encode(res)
	if err != nil {
		fmt.Printf("err: %v\n", err)
	}
}
ts, err := t.SignedString([]byte(os.Getenv("SECRET_KEY")))
if err != nil {
	fmt.Printf("err: %v\n", err)
}
w.Header().Set("Authorization", ts)
w.Header().Set("Access-Control-Expose-Headers", "Authorization, Uid")
w.Header().Set("Uid", fmt.Sprintf("%d", uDB.ID))
return
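comparePasswords itself isn't shown above. Here is a minimal standard-library sketch of the hash-and-compare idea it describes; the post doesn't specify which hash the project uses, so SHA-256 is an assumption for illustration only, and a production app should prefer a dedicated password-hashing function such as bcrypt or argon2:

```go
package main

import (
	"crypto/sha256"
	"crypto/subtle"
	"encoding/hex"
	"fmt"
)

// hashPassword is a hypothetical stand-in for however the project
// hashes passwords before storing them.
func hashPassword(plain string) string {
	sum := sha256.Sum256([]byte(plain))
	return hex.EncodeToString(sum[:])
}

// comparePasswords hashes the plaintext password and compares it to the
// stored hash in constant time, so timing doesn't leak information.
func comparePasswords(plain, storedHash string) bool {
	h := hashPassword(plain)
	return subtle.ConstantTimeCompare([]byte(h), []byte(storedHash)) == 1
}

func main() {
	stored := hashPassword("hunter2")
	fmt.Println(comparePasswords("hunter2", stored)) // true
	fmt.Println(comparePasswords("wrong", stored))   // false
}
```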
Note: the approach to error handling shown in the code above is not ideal; I am planning on implementing more robust error handling for the handlers package.
Currently, this web app is hosted on Railway. The application is containerized using Docker and built in stages: first the frontend is built, then the Go executable, and then both are copied into a final image that starts the server. Here is the full Dockerfile:
FROM node:alpine AS client_build
ARG PORT
ARG RAILWAY_STATIC_URL
ENV REACT_APP_API_URL=/api/
WORKDIR /client/
COPY ./client ./
RUN yarn install && yarn build

FROM golang:alpine AS server_build
ARG PORT
RUN apk --no-cache add gcc g++ make git
WORKDIR /go/src/app
COPY ./cmd .
RUN go mod tidy
RUN go build -o ./bin/blog-backend

FROM alpine:latest
ARG PORT
RUN apk --no-cache add ca-certificates bash
WORKDIR /root/
COPY --from=client_build /client/build ./build/
COPY --from=server_build /go/src/app/bin/blog-backend .
EXPOSE ${PORT}
ENTRYPOINT ["./blog-backend"]
Railway pulls from the GitHub repo containing this project and looks for a Dockerfile in the repo. Once it identifies the Dockerfile, it builds the image and hosts it. I plan on making a more detailed post about using Railway in the future, but you can check out the Railway docs if you want to learn more now.
While I am not completely new to Go, React, or MySQL, this project is the first time I have put all of these components together to make a full-stack application. As I worked on this project, I got a much better feel for working with Go database drivers, static file hosting using the net/http package, and the process of containerizing and deploying a full-stack application. I have a much better understanding of some of Go's standard library packages after using them to implement the functionality for this project. This project also gave me the opportunity to learn more about Docker, particularly multi-stage builds and how data from one stage can be shared with subsequent stages.
I have a few plans for this project, and you will likely see posts about the process of implementing them in the near future. First, I plan on writing tests for the handlers and data packages, most likely using Testify. The other big change I'd like to make to the handlers package is an error handling framework, which will essentially consist of a common error data structure used to build informative errors that can easily be logged and/or sent to the client.
Another major change I’d like to make is adding support for comments. I have defined the Comment model, but at the moment that is the extent of the implementation of anything related to comments. I am hoping to add this feature in the very near future and will likely write a post detailing the process. I’ll also be making some changes to the frontend, but that will be covered in a separate post.
I hope that those of you who made it this far found this post informative, interesting, and maybe even a bit entertaining. Thanks for reading!