No update was provided for the week ending 2025-06-04.
Arjun:
Milestone: Complete the Current Project and get the first draft
Original Date: May 30, Target Date: Jun 10
Change in Date:
The project was in good shape until the evaluation method turned out not to work for our use case, which stalled progress for weeks.
Action Items completed:
These constraints are helpful guidelines for identifying invalid instances in the generated dataset.
They would capture the relationship between Gender and Relationship (male-husband) (Adult dataset), but fail to capture aspects of the dataset, such as correlations, like individuals with higher age having lower physical health (MEPS dataset).
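As a rough sketch of how such constraints can act as filters (the column names follow the Adult dataset, but the constraint set here is a hypothetical example, not the actual list used in the experiments):

```python
# Hypothetical sketch of constraint-based filtering of generated instances.
# Column names follow the Adult dataset; the constraints are illustrative.

def violates_constraints(row):
    # Gender-Relationship consistency: a husband must be male, a wife female.
    if row["relationship"] == "Husband" and row["sex"] != "Male":
        return True
    if row["relationship"] == "Wife" and row["sex"] != "Female":
        return True
    return False

def filter_valid(rows):
    """Drop generated instances that break any domain constraint."""
    return [r for r in rows if not violates_constraints(r)]

flipped = [
    {"sex": "Female", "relationship": "Husband"},  # invalid after flipping sex
    {"sex": "Female", "relationship": "Wife"},     # still consistent
]
valid = filter_valid(flipped)  # keeps only the consistent instance
```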
Results/Findings:
Found the domain constraints for each dataset, used them as filters, and resolved some minor issues along the way.
Issues:
Replies:
Dr. Lei:
2. Results/Findings should be what you have accomplished, not what you’ve been working on.
Arjun:
Updated the status with more details.
No update was provided for the week ending 2025-05-21.
Arjun:
Hello Group,
Milestone: ASE2025 (May 30)
Action Items:
I have been working on the problem of trying to find the probability of an instance when the data distribution is known.
Results/Findings:
The problem seems straightforward on the surface. Our use case is such that we expect the counterfactuals (from the fader network) to be more realistic than instances obtained by simply flipping the sensitive attribute. I have implemented a couple of techniques, namely conditional VAE, GMM, and Extended Isolation Forest (still tuning it). However, these methods are either too imprecise, too brittle, or unable to show that our instances are better than the simply flipped ones.
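A minimal sketch of the density-based comparison we are after, using scikit-learn's GaussianMixture (one of the techniques mentioned) on synthetic continuous data; the real pipeline works on mixed tabular features, which is exactly where these methods struggle:

```python
# Sketch: score generated instances under a fitted density model. Higher mean
# log-likelihood => more "realistic" under that model. Data here is synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=(500, 4))   # stand-in for real data
gmm = GaussianMixture(n_components=3, random_state=0).fit(train)

counterfactuals = rng.normal(0.0, 1.0, size=(10, 4))    # near the data manifold
naive_flips = rng.normal(5.0, 1.0, size=(10, 4))        # far from it

more_realistic = (gmm.score_samples(counterfactuals).mean()
                  > gmm.score_samples(naive_flips).mean())
```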
Issues/Questions/Concerns:
The current issue is that the methods used haven’t been suitable (or fine-tuned enough) to separate the two classes of instances. One complication is that we are working with mixed variables, and it’s hard to find methods that handle mixed variables well (in one dataset the categorical variables dominate, in another the continuous variables dominate, which makes tuning the models, or modeling the data, harder).
I will look into the experiments done so far and see if I can do anything better, gather my thoughts on the assumptions and experiments of the last few days, and have a discussion in today’s meeting. Please allocate 45 minutes for me.
Replies:
Dr. Lei:
this is indeed the biggest hurdle right now. did you try constraint-based evaluation? it will evaluate whether the flipped instance is possible or not. it will not replace probability-based evaluation, but can be used as one way to show the advantage of our approach, if you can get good results there.
Dr. Lei:
i suggest you look back at all the different approaches you have tried, and choose one that you feel most confident and then do hyperparameter tuning.
Arjun:
Hello Group,
Milestone: ASE2025 (May 30)
Action Items:
Target Date: May 9 [ Hard deadline ]
Results/Findings:
I have done an extensive search of the existing literature on counterfactual fairness. I have documented it in a file; however, I haven’t yet gone through all of the listed literature.
I ran a couple of experiments to see whether I could decrease the overall experiment time. The search over [h-3, h+3], where h is half the number of input dimensions, is very extensive; I think it could be limited to [h-2, h+3] or [h-1, h+3], since the lower dimensions do not fit the data well (the ATN is lower for lower dimensions, especially at 2 or 3). For the Credit/COMPAS datasets the run takes a few hours, but for Adult it takes considerably longer, given the size of the dataset. This excludes the time taken by the fader network (which is comparatively small).
The existing literature simply changes gender/race without considering changes in other attributes, which doesn’t lower the ATN of the invalid instances by much (the ATN stays in the range [0.91, 0.97], relative to the original instances, across three datasets when gender/race is changed). ATN therefore doesn’t seem to be a good metric for demonstrating the invalidity of flipped instances. There seem to be a couple of works on filtering out invalid instances that could apply to cases like ours; I am looking into them.
Issues/Questions/Concerns:
I was looking to combine the loss with the total correlation to find a suitable latent dimension (a VAE with a suitable latent dimension), but there doesn’t seem to be a straightforward way to combine them. The total correlation is often lowest (highest independence) at the smallest latent dimensions (latent_dimension = 2 or 3), while the lowest loss occurs at some other latent dimension. I calculated the ATN for all latent dimensions in the range [h-3, h+3] and examined ATN, total correlation, and loss across that range, but couldn’t come up with a way to combine them to identify one of the best models (one whose samples have a sufficiently high ATN).
Our results indicate that ~5-10% of the instances are discriminatory, whereas existing methods report ~50%. I am looking for a way to remove the false positives in these cases, but I am not quite sure we can cut the number of discriminatory instances to below 5%.
I will be providing daily status updates at the end of the day from today till the first draft.
Replies:
Dr. Lei:
Arjun:
Reply to (1) and (2): ATN has two components - one that measures each individual column’s distribution and another that measures a column’s distribution with respect to the other columns. If the original dataset has 800 Males and 200 Females, the flipped dataset will have 200 Males and 800 Females; this changes both components (the individual column’s distribution much more). However, it doesn’t affect the overall score by a large margin, because each dataset has around 6-13 columns whose individual distributions don’t differ and whose relations with the other columns don’t change (as long as gender isn’t among them).
The metric itself seems very relaxed: it isn’t reduced by a large margin even though flipping produces a data distribution that doesn’t occur in the real world. A constraint-based approach, which is much better at capturing these relationships, is preferable in that context.
I will share the findings after looking into it.
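The dilution effect described above can be sketched numerically. Total variation distance is used here only as a stand-in for ATN's per-column component (ATN itself is defined elsewhere); the 800/200 split matches the example:

```python
# Numeric sketch of the dilution effect: flipping gender drastically shifts
# that one column's distribution, but untouched columns contribute zero, so
# an average over many columns barely moves.
def tv_distance(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

original_gender = [0.8, 0.2]   # 800 Males, 200 Females
flipped_gender = [0.2, 0.8]    # after flipping every instance

per_column = tv_distance(original_gender, flipped_gender)  # ~0.6, a big shift
# Averaged with, say, 9 untouched columns (distance 0 each), the overall
# score barely reflects it:
overall = (per_column + 0.0 * 9) / 10                      # ~0.06
```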
Arjun:
Milestone: ASE2025 (May 30)
Action Items:
Target Date: May 2
Results/Findings:
In the first set of experiments, I was focusing on the wrong metric to evaluate the issue (discriminator accuracy instead of per-class recalls).
Since per-class recalls are what I should focus on, I tried giving more weight to the discriminator’s mispredictions on the minority classes. This improved the recalls. However, balancing the dataset across the minority classes gave even better results, at the cost of a significant increase in the number of instances. Example: a training set with 100 instances in a minority group and 900 instances in a majority group grows to 1800 instances after balancing, which is computationally very expensive.
Issues/Questions/Concerns:
The issues are twofold. In this network, the encoder-decoder architecture carries the bulk of the responsibility for reconstruction (and for hiding the sensitive attribute) across 8, 10, or 20 attributes (which become 18, 53, and 101 attributes after processing). The classifier works on a comparatively small task (guessing a single attribute (age), or at most 6 classes (race), from a small latent space (max: 10 input features)), which makes it quite difficult to select the strength of each of these components. Furthermore, the adversarial signal is quite small compared to the reconstruction loss, too small to effectively hide the sensitive attribute. I have tried increasing it with a higher weight (static or dynamic), but pushing the weight beyond a limit can hurt the reconstruction (and increase the number of invalid instances). I still haven’t figured out a suitable way to tune this weight, or to adjust it across the epochs so that the adversarial loss decreases by the end of training. My current concern is that this component (adversarial_loss) stays almost static (or sometimes increases slightly) throughout the training process.
I will shift my focus to paper writing for the next couple of weeks and come back to the experiments afterwards.
I will be presenting a new idea today and sharing the schedule for the milestone on my channel by the end of the day.
Replies:
Dr. Lei:
very well-written. a few questions:
Arjun:
–> Recall is the ratio of true positives to the total number of true labels in the dataset (here, for each value of gender/race). In the datasets we work with, one value dominates the other (e.g., 80% of instances could be Male and only 20% Female). In such cases, overall accuracy doesn’t paint an accurate picture of the classifier’s performance on each value of gender - it could have guessed 95% of the latent features to be male (since the majority of the data is male) and only 5% to be female. For the sensitive attribute to be truly hidden, the classifier should instead get only about 50% of the true male labels and about 50% of the true female labels right (chance level), which is what per-class recall captures. Extending this to race, where there are 5 values (say 50% of instances belong to the privileged race and the remaining 50% to the other 4 values), the classifier could simply guess the privileged race every time and still reach 50% accuracy (primarily because of the mispredictions on the minority races).
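A toy sketch of the accuracy-vs-recall point (the counts are illustrative, not taken from the real datasets): a majority-guessing classifier can report high overall accuracy while serving the minority class very poorly.

```python
# Illustration: high accuracy can hide very poor minority-class recall.
from sklearn.metrics import accuracy_score, recall_score

# 800 Male / 200 Female, and a classifier that predicts Male 95% of the time.
y_true = ["Male"] * 800 + ["Female"] * 200
y_pred = ["Male"] * 950 + ["Female"] * 50

acc = accuracy_score(y_true, y_pred)   # looks fine overall (0.85)
rec = recall_score(y_true, y_pred, labels=["Male", "Female"], average=None)
# per-class recall: Male 1.0, Female 0.25 - the minority class is poorly served
```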
–> The above is an example with 1000 instances. In a dataset like Adult, however, the race counts are {White: 20747, Black: 2258, Asian-Pac-Islander: 709, Amer-Indian-Eskimo: 222, Other: 193}, for 24,129 training rows in total. If we balanced this dataset across the race values, the number of samples would be 20747 * 5 = 103,735, which is a very large number.
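The balancing arithmetic can be written as a small helper (group counts taken from the examples above):

```python
# Oversampling every group to the size of the largest group multiplies the
# training-set size: largest_group * number_of_groups rows in total.
def balanced_size(group_counts):
    """Rows after oversampling every group to match the largest group."""
    return max(group_counts) * len(group_counts)

balanced_size([900, 100])                    # 1800, the 100/900 gender example
balanced_size([20747, 2258, 709, 222, 193])  # 103735, the Adult race groups
```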
what do you mean by “the strength of each of these components”? is it the number of latent dimensions?
–> In this case, the “strength” of each of these components means its architecture (primarily, how I select one architecture and then choose the other relative to it). For example, I can choose the architecture (number of layers/neurons) of the encoder-decoder network and then select a classifier (which works on a comparatively simpler task) - the caveat is that predicting race (which has 5 or 6 labels) needs a stronger classifier than predicting gender, while predicting age (a single value) needs a comparatively weaker discriminator/regressor.
just one suggestion, if the number of processed attributes is high, and if this is due to one-hot encoding, you can probably use a learned embedding for the categorical values instead of one-hot encoding. the learned embedding can have a much lower dimension. try to read up on how to use learned embeddings for categorical values.
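A minimal sketch of this suggestion, assuming PyTorch (already used in the project); the cardinality and embedding size here are illustrative, not taken from the real datasets:

```python
# Replace a high-dimensional one-hot column with a low-dimensional learned
# embedding. The embedding weights are trained along with the rest of the net.
import torch
import torch.nn as nn

n_categories = 42  # e.g. a categorical column with 42 distinct values
embed_dim = 4      # much smaller than the 42-dim one-hot vector

embedding = nn.Embedding(n_categories, embed_dim)
codes = torch.tensor([0, 7, 41])  # integer-encoded category values
dense = embedding(codes)          # dense, trainable (3, 4) representation
```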
–> currently, the reconstruction loss is decreasing very smoothly and doesn’t seem to be a concern. Thus, the dimensionality of the (one-hot encoded) data doesn’t seem to be the issue; the issue is the adversarial signal, which seems weak and doesn’t change throughout the training epochs.
did you try a very big weight for the adversarial signal? i don’t see any reason why the adversarial loss has to be static. if it has a big weight, then the learning algorithm will try to focus on minimizing it instead of just minimizing the reconstruction loss. if you give a very big weight to the adversarial loss and it still does not decrease, then it is time to consider whether the discriminator network is powerful enough; if not, increase its capacity by adding neurons or layers. i don’t think adversarial loss is that different from reconstruction loss, from the optimization perspective.
–> I tried different weights for the adversarial signal.
First, I tried static weights (2, 4, 6, 8, 10). After the first 10 epochs or so, the reconstruction loss (initially 10x-25x the adversarial loss) reaches a value similar to the adversarial loss (around 1.2 each); at that point the static weight applied to the adversarial signal forces the encoder to hide the sensitive attribute. With a static weight of 10, the network starts changing other attributes as well (resulting in more invalid instances, which does decrease the adversarial loss), compared to a static weight of 2 (which results in fewer invalid instances).
I also tried scaling the adversarial weight dynamically, i.e., linearly increasing it up to a cap (say, 2) and then holding it constant. This seems to have a similar effect to a static weight of 2.
I would expect the adversarial loss to decrease after a certain number of epochs and reach an equilibrium, but that doesn’t happen.
Giving a higher weight to the adversarial component decreases the adversarial loss (epoch 0: 1.4 -> final epoch: 1.2), at the cost of more invalid instances.
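The dynamic weighting described above can be sketched as a simple schedule (warmup_epochs is an illustrative parameter; the cap of 2 matches the example):

```python
# Linearly ramp the adversarial weight up to a cap, then hold it constant.
def adversarial_weight(epoch, warmup_epochs=10, cap=2.0):
    """Increase the weight linearly from 0 to `cap`, then keep it there."""
    return min(cap, cap * epoch / warmup_epochs)

# The per-step objective would then look like:
#   loss = reconstruction_loss + adversarial_weight(epoch) * adversarial_loss
weights = [adversarial_weight(e) for e in (0, 5, 10, 20)]  # [0.0, 1.0, 2.0, 2.0]
```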
Arjun:
Hi Group,
Milestone: May 20, ASE
Action Items:
I have been following up on the action items listed in the past meeting, namely:
a. Change the hyperparameters to see the results.
b. See valid test cases ( and relax constraints ).
c. Hyperparameter tuning for each dataset
d. Target a conference [ ASE ] | Deadline: May 20 |
While performing the action items, I ran into the issue of organizing the results and interpreting them across different hyperparameter configurations.
I spent a good amount of time refactoring the existing code and am working on the action items.
Here are my upcoming action items:
Update the results from (a), (b), (c) | Apr 23, Wednesday [ by evening ] |
List the papers that could be compared with my current work | Apr 25, Friday |
Replies:
Dr. Lei:
sounds good. now it is time to finalize the schedule for making submissions. have a list of to-dos, and also target dates for each one, and try to make each one on time. also have an internal deadline at least two weeks before the real one.
Arjun:
Hello Everyone,
Action items completed:
After some tinkering, I was able to reduce the change in unintended sensitive attributes to a much smaller percentage. The number is even lower if I don’t count changes within an unintended sensitive attribute (for example, among the unprivileged races). However, in the Credit dataset, the change is still high.
Today, I will present the results: ATN, the change in other sensitive attributes, and the number of fairness violations. If the results are good enough, we can move forward with them.
Replies:
Dr. Lei:
great to hear. look forward to hearing what you did to improve the results during friday’s meeting. if i recall correctly, now you did three datasets? we need to have more datasets and also perhaps more models. in general, reviewers want to have diversity in datasets and models. this can also help verify the validity of our approach.
Arjun:
Sure, Dr. Lei. I will work on it.
Arjun:
Hello Everyone,
I followed up on the action items discussed last Friday - primarily understanding the adversarial training process and verifying the implementation.
I made a few improvements to the existing implementation. However, the accuracy/RMSE of the adversarial classifier/regressor is almost constant after the first few epochs. I am training the adversarial objective with a higher weight to see if the results improve.
I will take 20 mins to present the results & the issues in today’s meeting.
Replies:
Dr. Lei:
it is normal to have a technical obstacle during implementation. try to think about what is the root cause, and then what you can do about it. also try to be systematic. one suggestion is that you may want to have some hypotheses and then design some small experiments to verify them, one at a time.
Arjun:
Hello Everyone,
Milestone: Apr 8 ( Resolve the issues in the current implementation & Complete with the necessary dataset )
I have been working on the current project for a while now.
Last week I presented the issues I am currently facing in the project, related to multiple sensitive attributes flipping when only one is conditioned on in the fader network. I have been investigating the problem; however, I have yet to understand it completely.
Additionally, I presented at IWCT 2025, the preparation for which took a reasonable amount of time.
Replies:
Dr. Lei:
what is this issue? is this the one you discussed with me on friday afternoon or something new?
Arjun:
The issue is the one I discussed on Friday Dr. Lei.
Arjun:
Hi Group,
This week, I have been working on the implementation of my current work on Fairness Testing. I have some initial results; however, there are some issues I am facing. Once I have more results, I will post them to my Slack channel.
I have a new idea presentation tomorrow, where I will be presenting this work:
Chen, Canyu, et al. “When fairness meets privacy: Fair classification with semi-private sensitive attributes.” arXiv preprint arXiv:2207.08336 (2022).
Arjun:
Hello Everyone,
Milestone: Complete the current project and have a draft ( ~ Apr 2 )
I was traveling during the spring break.
After my return, I worked on the existing implementation. I have nearly completed the implementation for the current project.
I will be sharing the results of end-to-end implementation in the meeting or by the end of the day.
I will be updating the status of the progress every day at the end of the day till the completion of the project.
Replies:
Dr. Lei:
make a commitment by first removing ~ in the target date
Arjun:
Hello Everyone,
This week I created the artifacts for IWCT 2025. I also carried out the literature review for the paper I am currently working on.
Currently, I am still focused on implementation of the current project I am working on.
I am lagging behind in the project, so I will post a daily progress update in my channel until the first draft is complete.
Replies:
Dr. Lei:
try to be iterative. try to have a baseline version first and then improve upon it.
Arjun:
Hello Everyone,
This week, I focused on the problem of changing the sensitive attribute in tabular datasets, for which I studied a couple of possible techniques (along with their pros and cons) and presented them. Furthermore, I addressed the comments on the previous IWCT paper and made the source code runnable in a virtual environment, along with a document explaining how to run it. I have also revisited the implementation of the Fader networks.
I am looking into the papers that I could compare our work with regarding counterfactual fairness.
Replies:
Dr. Lei:
sounds good. i just sent you some comments on the iwct paper from nist.
Arjun:
Hello Group,
Milestone: Complete the ongoing project ( ~ Mar 4 ).
Action Item Completed:
I will present some results today regarding the flipping of the sensitive attribute and take action based on today’s discussion.
Replies:
Dr. Lei:
sounds good. i suggest you target ICSE which has the submission deadline in about one week
Arjun:
Hello Everyone,
This week, I have been reading literature for my use case: flipping sensitive attributes for fairness testing while respecting domain constraints.
I found this work to be closely related to my work:
Lample, Guillaume, et al. “Fader networks: Manipulating images by sliding attributes.” Advances in neural information processing systems 30 (2017). I am looking forward to a brief discussion tomorrow.
Replies:
Dr. Lei:
try to make your report more informative. what are the major findings, and why do you consider this paper to be related?
Arjun:
Hello Everyone,
Milestone: Feb 7 ( Baseline ): Includes end-to-end implementation of two types of VAE models
Action Items completed from the past week:
A conditional VAE was used similarly to how conditional VAEs are used for image data, and the ATN found was ~0.75 for the Adult income dataset.
The current implementation needs to be improved for end-to-end results.
I will present tomorrow for the New Idea session.
Replies:
Dr. Lei:
what do you mean by end-to-end implementations? does it mean end-to-end for fairness testing? in the following days, i suggest you focus on the topic of how to flip sensitive attributes. first try to find all the existing work on this problem or related to this problem. second try to develop an algorithm to do the flipping. ask the question, what type of fairness would your algorithm/end-to-end approach test? third try to have an initial implementation to validate the algorithm.
Arjun:
Hello Everyone,
Milestone: Feb 7 ( Baseline )
Implement a basic version of the project end-to-end for one or more datasets ( DisentangledVAE - ConditionalVAE ) and get the results | Initial Verification |
The end-to-end implementation is carried out to understand the project better and find the potential issues during each step.
Action Items completed this week:
Action Items for next week:
Replies:
Dr. Lei:
good results on the use of disentangled VAE. i would suggest an incremental approach. now you have some initial confidence on disentangled VAE. try to move onto conditional VAE on how to flip the sensitive attributes. try to think deep, in terms of what are the fundamental issues in this problem, what are the possible approaches to address the problem, what approaches you want to take and why, in terms of its pros/cons.
Arjun:
Hello Everyone,
I had a paper submission on Jan 10. I worked on the paper submission before the holidays.
For the past week, I revisited the paper for the necessary fixes mentioned by Dr. Lei. Furthermore, I reflected on this project, which I have been working on since Spring 2024. I have a few slides to share regarding the project and the paper submission. I want to book 30 mins for the presentation.
New Action Items: I am looking forward to a new project regarding using Disentangled VAE and Conditional VAE for fairness testing.
Replies:
Dr. Lei:
congrats on your first submission. the two directions are indeed good ones to explore. i suggest you focus on the use of Disentangled VAE in the following week, since this is relatively clear in terms of what to do. that is, try to do some experiments to see if its use helps improve the naturalness.
Arjun:
Hello Everyone,
Milestone: Dec 20 | Internal Deadline for the paper Draft |
What is Completed?
Pipeline for the End-to-End Implementation.
What is pending?
Design Decisions, primarily use of the columns and how to flip the sensitive attribute.
Write-up for the Experiment and Results sections.
I propose two forms of processing for the dataset.
All the columns included in AI360 are used for the datasets (Adult, Compas, Credit)
I would like to sign up for a discussion tomorrow.
Replies:
Dr. Lei:
do you have a first draft of the three sections, Background, Approach, and Experiments?
Arjun:
Hi Group,
Milestone: Dec 20 ( Paper’s first Draft )
I have created an end-to-end implementation of the approach for one dataset (Adult) and obtained the results.
I will have the results for the other two datasets (COMPAS, Credit) by late afternoon or evening.
Summary of Results:
The t_way_samples are obtained from the latent space representation of the training instances.
The ATN was found to be higher when bin means were used to generate the t_way_samples (for t = 4, ATN: 0.775).
Furthermore, using bin_means from the discretized columns yielded a better ratio of discriminatory instances than the ratio obtained from the “test set” of the Adult income dataset.
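A hypothetical sketch of this sampling scheme: each latent dimension is discretized into bins and samples are built from combinations of bin means. The bin counts are illustrative, and itertools.product enumerates all combinations here, whereas a real t-way approach would use a much smaller covering array:

```python
# Build samples from combinations of per-dimension bin means of latent codes.
import itertools
import numpy as np

latent = np.random.default_rng(0).normal(size=(1000, 3))  # stand-in latent codes

def bin_means(column, n_bins=4):
    """Quantile-bin one latent dimension and return the mean of each bin."""
    edges = np.quantile(column, np.linspace(0, 1, n_bins + 1))
    ids = np.clip(np.digitize(column, edges[1:-1]), 0, n_bins - 1)
    return [column[ids == b].mean() for b in range(n_bins)]

per_dim_means = [bin_means(latent[:, d]) for d in range(latent.shape[1])]
t_way_samples = np.array(list(itertools.product(*per_dim_means)))
# 4 bin means per dimension, 3 dimensions -> 64 candidate samples of shape 3
```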
I would like to sign up for a discussion for feedback on the results and further experiments.
Replies:
Dr. Lei:
sounds like good progress. also remember to discuss the issue of ground truth/discretization you mentioned during the meeting. please try to have an agenda and also what you expect to get out of the discussion, in terms of action items and decisions to make.
Arjun:
Hi All,
Milestone: Dec 24, Paper’s Internal Deadline
I have completed the following action items:
Using the mean value of the bin results in a slightly better ATN.
I will present briefly the results if the time permits.
I have an implementation of a conditional VAE using PyTorch. However, I am trying to integrate it with the existing library for better results.
I will complete this last action item and write the approach section as soon as possible for the design decision.
Replies:
Dr. Lei:
great job. now it is time to start on paper writing
Arjun:
Hello Everyone,
Milestone: Dec 24, Paper’s Internal Deadline
I am following up on the action items from the past week:
The target date for the action items listed above was today, to stay on track with the milestone; however, I was quite busy this weekend with TA duties. I will follow up on these action items and their results via Slack by tomorrow evening.
Replies:
Dr. Lei:
I would suggest you take off “possibly”. mindset matters. things happen only with commitment.
Arjun:
Hello Everyone,
Milestone: Nov 29: Complete Pipeline for current approach along with results
Action Items performed this week:
Project Page: https://ajdahal.github.io/portfolio/
I will be presenting in the new idea session today.
Replies:
Dr. Lei:
the project page does not seem to be in a good state. also please summarize the results of your action items, which would be more informative.
Arjun:
Hello Group,
Milestone: Baseline for Fairness Testing | Nov 12 |
Action Item completed this week:
I have been doing some experiments with regard to VAEs for generating data points.
Fixed the issue with the KL divergence for ‘hours-per-week’ observed in the last meeting.
I have obtained metrics for Reconstruction Loss vs. Number of Embedding Dimensions.
ATN and KL divergence for random binning of the embedding dimensions.
Currently, I am carrying out t-way combinations of latent-space vectors. I believe I will have the results by this afternoon.
Action Items for next week:
I would like to book 15 minutes of time for discussion on the results and the approach.
Arjun:
Hello Group,
Milestone: Nov 12 | Baseline for Fairness Testing |
I am still working on the experiments regarding fairness testing for Variational AutoEncoder.
I will share some results on Slack before Nov 12 regarding the process, with some metrics.
Replies:
Dr. Lei:
since you are targeting IWCT, whose deadline is fast approaching, it would help if you work out a schedule, with broken-down tasks and target dates. the schedule will guide all your efforts toward the paper submission.
Arjun:
Hello Everyone,
Milestone: Baseline for Fairness Testing ( Nov 12 )
I am working on modifying the existing VAE library to incorporate combinatorial sampling in its latent space. Furthermore, I am also looking into creating a conditional VAE and recording ATN and KL-divergence metrics along the way. I am currently working on the implementation.
I will briefly discuss the findings this Friday.
Arjun:
Hi Group,
Milestone: Baseline for Fairness Testing
I have collected and reviewed the necessary literature on fairness testing with variational autoencoders and implemented the necessary code for a conditional variational autoencoder. I would like to sign up for a brief discussion tomorrow regarding the review and the results.
Replies:
Dr. Lei:
do you want to put a target date on your milestone?
Arjun:
Hi Group,
Milestone: Finalize the baseline approach Oct 18
I have been working on the following action items since last week:
I have made some progress on the action items above, but haven’t carried them through to completion, primarily due to travel.
I will update my progress every day until the coming Friday to ensure the baseline approach is finalized.
Replies:
Dr. Lei:
try your best to make the milestone
Arjun:
Milestone: Finalize the baseline approach for Fairness testing ( 18 / 08 )
Intermediate Goals:
A. Complete literature search for fairness testing using data samples that reflect realistic changes when sensitive attribute is flipped for fairness testing. ( 11 / 08 )
B. Generation of Data samples conditioned on specific sensitive attribute. ( 11 / 08 )
Action Items completed:
I will present today on the topic of “LLMs and their applications in Tabular Data”.
Replies:
Dr. Lei:
sounds good. i am very interested in hearing what you found out from your literature review first, in your presentation today. also what are the specific issues with conditional VAE?
Arjun:
Thank you, Dr. Lei, for your feedback. I will complete the literature review by Friday. Furthermore, I will sample a few data points from the CTVAE and present some results this Friday.
To ensure I make incremental progress on the task, I will post a status update in my research channel on Wednesday evening.
Arjun:
Hi Group,
I have been following up on the action items discussed in the last meeting. I will share some findings on LLMs producing samples of tabular data for different prompts, which will take around ~15 mins. I plan to continue with the action items (sampling in the latent space of the VAE, and generating adversarial examples on top of those samples) and present the results in upcoming meetings.
My next milestone: sample instances in the latent space of the VAE, generate adversarial examples from those samples to produce discriminatory samples, and analyze the results at each step (Oct 4).
Replies:
Dr. Lei:
this is not a good status update, as I cannot get useful information out of it.
Arjun:
Hi Group,
I have been looking into the literature on Fairness and Clustering since the last meeting (shared in the arjun-research channel). Creating a baseline approach from the findings of the last few weeks is taking a little more time than expected.
I intend to present a summary of the findings, the approach, and the relevant literature in the upcoming meeting.
Arjun:
Hello All,
Milestone: Present on the literature of all existing work on fairness testing with clustering ( Sep 24 )
Action items for the week: I have been following up on the action items discussed in the last meeting (divide the data into further sub-clusters to separate discriminatory and non-discriminatory clusters; inspect the data points that were discriminatory and non-discriminatory in both labels and predictions), but haven’t been able to make much progress as I haven’t been well. I will present some findings in the upcoming meeting on Friday.
Replies:
Dr. Lei:
it is probably time to look back at what you have found out, and then start developing your own approach. sign up for a technical discussion to summarize your findings and propose some possible ideas for your own approach.
Arjun:
Hi All,
Milestone:
Improve upon the baseline approach
I have been investigating the clusters that have all the discriminatory samples.
I have some findings that I would like to share in tomorrow’s meeting ( ~15-20 mins ).
Replies:
Dr. Lei:
try to make a presentation on the major approaches to fairness testing, especially on any existing work based on clustering
© 2023 Jeff Lei's Lab.
We are part of the CSE Department at University of Texas at Arlington.